Daniel's Tech Blog

Cloud Computing, Cloud Native & Kubernetes

SSH access to AKS nodes for troubleshooting purposes

Under normal circumstances, you do not need SSH access to your AKS nodes. Even when you create a new AKS cluster, you do not have to provide an admin username and a public SSH key.

Deployment method | Admin username required? | Public SSH key required?
Azure portal      | No, cannot be set        | No, cannot be set
Azure CLI         | Optional                 | Optional
ARM templates     | Required                 | Required
Terraform         | Required                 | Required
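
For example, with the Azure CLI both values can be set explicitly when creating the cluster. The following is just a sketch; the resource group name, cluster name, and key path are placeholders.

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --admin-username azureuser \
  --ssh-key-value ~/.ssh/id_rsa.pub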

Why would you want SSH access to your AKS nodes? Troubleshooting is the answer, especially when you need access to the kubelet logs.

Assuming you provided an admin username and a public SSH key during the AKS cluster deployment, we can move directly on to the SSH access procedure.

In case you did not provide values for both options, have a look at the Azure docs.

-> https://docs.microsoft.com/en-us/azure/aks/ssh#add-your-public-ssh-key
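
As a rough sketch of what that involves for nodes running as regular VMs, the key can be added with az vm user update. The resource group and node name below are placeholders, and node pools based on virtual machine scale sets may require a different approach, which the linked docs describe.

az vm user update \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-nodepool1-12345678-0 \
  --username azureuser \
  --ssh-key-value ~/.ssh/id_rsa.pub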

The SSH access procedure itself is well described in the AKS section of the Azure docs.

-> https://docs.microsoft.com/en-us/azure/aks/ssh#create-the-ssh-connection

However, the documented procedure uses a Debian container image, which is much larger than an Alpine container image. I slightly modified the kubectl command to the following one, which spins up a pod on the AKS cluster.

kubectl run -it --rm --generator=run-pod/v1 aks-ssh --image=alpine --labels=app=aksssh
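
Note that newer kubectl versions have removed the --generator flag; there, kubectl run creates a plain pod by default and the command simply becomes the following.

kubectl run -it --rm aks-ssh --image=alpine --labels=app=aksssh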

I execute the command in one of my two bash terminals and get connected directly to the pod's prompt.

Next, I launch the following script in my second bash terminal.

#!/bin/bash

# Install the SSH client and bash in the pod
kubectl exec aks-ssh -c aks-ssh -- apk update
kubectl exec aks-ssh -c aks-ssh -- apk add openssh-client bash
# Copy the SSH private key into the pod and restrict its permissions
kubectl cp ~/.ssh/id_rsa aks-ssh:/id_rsa
kubectl exec aks-ssh -c aks-ssh -- chmod 0600 /id_rsa

The script installs the SSH client in the pod and copies the SSH private key into it. Afterwards, the pod is ready to be used as an SSH jump box for the AKS nodes.
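
Before using the pod as a jump box, a quick sanity check, for example printing the installed OpenSSH version, confirms the client is in place.

kubectl exec aks-ssh -c aks-ssh -- ssh -V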

In the second bash terminal, run the following command to get the names and IP addresses of the AKS nodes.

kubectl get nodes -o json | jq .items[].status.addresses[].address

Then we can log in to the specific AKS node from the pod via the following command.

ssh -i id_rsa username@ipaddress

On the AKS node we can query the kubelet logs.

journalctl -u kubelet -o cat
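
If the full kubelet log is too noisy, journalctl can narrow the output, for example to the last hour.

journalctl -u kubelet --since "1 hour ago" -o cat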

Alternatively, we can save them to a text file and copy that file via the jump box pod onto the local workstation.

#Run on AKS node
journalctl -u kubelet > kubelet-logs.txt

#Run on SSH pod
scp -i id_rsa username@ipaddress:~/kubelet-logs.txt ./kubelet-logs.txt

#Run on second bash terminal
kubectl cp aks-ssh:kubelet-logs.txt ~/kubelet-logs.txt
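
Putting the pieces together, the whole copy procedure can also be driven from the second bash terminal. The following is just a sketch: NODE_USER and NODE_IP are placeholders for the node's admin username and IP address, and it assumes the aks-ssh pod prepared above is still running.

#!/bin/bash

# Placeholders - replace with the node's admin username and IP address
NODE_USER="azureuser"
NODE_IP="10.240.0.4"

# Collect the kubelet logs on the AKS node
kubectl exec aks-ssh -c aks-ssh -- ssh -i /id_rsa -o StrictHostKeyChecking=no "$NODE_USER@$NODE_IP" "journalctl -u kubelet > kubelet-logs.txt"

# Copy the log file from the AKS node into the jump box pod
kubectl exec aks-ssh -c aks-ssh -- scp -i /id_rsa -o StrictHostKeyChecking=no "$NODE_USER@$NODE_IP:kubelet-logs.txt" /kubelet-logs.txt

# Copy the log file from the pod onto the local workstation
kubectl cp aks-ssh:kubelet-logs.txt ~/kubelet-logs.txt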

Now you know when to access AKS nodes via SSH and how to do it. You can find my scripts for the SSH access in my GitHub repo.

-> https://github.com/neumanndaniel/kubernetes/tree/master/ssh
