Daniel's Tech Blog

Cloud Computing, Cloud Native & Kubernetes

How to restore a container image from an Azure Kubernetes Service node to an Azure Container Registry?

Imagine that a specific version of the container image your application uses has been deleted from your Azure Container Registry. It cannot be restored through your CI/CD pipeline for whatever reason, and you still need this version. How can you restore it when a pod using that image is still running on one of the nodes in your Azure Kubernetes Service cluster?

Azure Container Registry - Empty repository

That is the topic of today’s blog post. First, let us set the stage with the specifics of this scenario.

The Azure Container Registry does not have the soft-delete feature enabled, which, according to the documentation, is still in preview, and SSH access to the nodes is disabled.

Fortunately, using the kubectl plugin images, installed via krew, we discovered that pods on one of our Azure Kubernetes Service clusters are still running the specific container image version we need to restore. Running kubectl get pods -o wide identifies the nodes they run on.
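If the plugin is not installed yet, krew can fetch it; it is published in the krew index under the name images:

❯ kubectl krew install images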

❯ kubectl images
[Summary]: 1 namespaces, 3 pods, 9 containers and 2 different images
+----------------------------+-------------------------+------------------------------------+
|            Pod             |        Container        |               Image                |
+----------------------------+-------------------------+------------------------------------+
| go-webapp-6d75f5dc64-7m6dd | go-webapp               | azstcr.azurecr.io/go-webapp:latest |
+                            +-------------------------+------------------------------------+
|                            | (init) istio-validation | docker.io/istio/proxyv2:1.25.0     |
+                            +-------------------------+                                    +
|                            | (init) istio-proxy      |                                    |
+----------------------------+-------------------------+------------------------------------+
| go-webapp-6d75f5dc64-cj95g | go-webapp               | azstcr.azurecr.io/go-webapp:latest |
+                            +-------------------------+------------------------------------+
|                            | (init) istio-validation | docker.io/istio/proxyv2:1.25.0     |
+                            +-------------------------+                                    +
|                            | (init) istio-proxy      |                                    |
+----------------------------+-------------------------+------------------------------------+
| go-webapp-6d75f5dc64-hf97p | go-webapp               | azstcr.azurecr.io/go-webapp:latest |
+                            +-------------------------+------------------------------------+
|                            | (init) istio-validation | docker.io/istio/proxyv2:1.25.0     |
+                            +-------------------------+                                    +
|                            | (init) istio-proxy      |                                    |
+----------------------------+-------------------------+------------------------------------+

❯ kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP             NODE                              NOMINATED NODE   READINESS GATES
go-webapp-6d75f5dc64-7m6dd   2/2     Running   0          3m24s   100.64.0.138   aks-default-13458874-vmss00001l   <none>           <none>
go-webapp-6d75f5dc64-cj95g   2/2     Running   0          3m23s   100.64.1.7     aks-default-13458874-vmss00001m   <none>           <none>
go-webapp-6d75f5dc64-hf97p   2/2     Running   0          11d     100.64.2.18    aks-default-13458874-vmss00001n   <none>           <none>

Azure VMSS instances of the AKS cluster

As we cannot use SSH to connect to the node, only two options are left. The first one is using kubectl debug and then chroot /host, as outlined in the Azure Kubernetes Service documentation.

-> https://learn.microsoft.com/en-us/azure/aks/node-access?WT.mc_id=AZ-MVP-5000119
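As a quick sketch, the first option looks like this; the node name comes from the kubectl get pods output above, and the debug container image is the one used in the AKS documentation:

❯ kubectl debug node/aks-default-13458874-vmss00001n -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
chroot /host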

The second one uses the Azure CLI run-command option to execute single commands or shell scripts on a Virtual Machine Scale Set instance. In this blog post, we focus on the second option.

❯ az vmss run-command invoke --help

Command
    az vmss run-command invoke : Execute a specific run command on a Virtual Machine Scale Set
    instance.
...

Azure Kubernetes Service nodes use containerd as the container runtime. Hence, the familiar Docker CLI is not available, and we must use the containerd CLI ctr to achieve our goal.

To restore the container image, we execute the following two ctr commands. Both target the k8s.io containerd namespace, which Kubernetes uses for its container images.

ctr -n k8s.io images list
ctr -n k8s.io images push

With the first one, we perform a due diligence check to verify that the container image still exists on the node.
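A quick note on the --instance-id parameter used below: the suffix of a VMSS instance’s computer name is its instance ID encoded in base 36, so the node aks-default-13458874-vmss00001n maps to instance ID 59. Bash can do the conversion directly:

❯ echo $((36#1n))
59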

❯ az vmss run-command invoke --resource-group rg-aks-azst-1-nodes --name aks-default-13458874-vmss --instance-id 59 --command-id RunShellScript \
  --scripts 'ctr -n k8s.io images list | grep azstcr.azurecr.io/go-webapp:latest' | jq -r '.value[0].message | @text'

Enable succeeded:
[stdout]
azstcr.azurecr.io/go-webapp:latest                                                                                                  application/vnd.docker.distribution.manifest.v2+json      sha256:01500895b5ee7c14d00002e59312c1c1eba50705c7904338aa8123e99f94e5b0 2.0 MiB   linux/amd64                                                                     io.cri-containerd.image=managed

[stderr]

The second command pushes the container image back to the Azure Container Registry; once it succeeds, we should see the image in the registry again.
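The ctr push command needs registry credentials. One way to retrieve them, assuming the admin user is enabled on the registry and the registry name is azstcr, is the Azure CLI:

❯ az acr credential show --name azstcr --query "{username:username, password:passwords[0].value}"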

❯ az vmss run-command invoke --resource-group rg-aks-azst-1-nodes --name aks-default-13458874-vmss --instance-id 59 --command-id RunShellScript \
  --scripts 'ctr -n k8s.io images push -u <ACR_USERNAME>:<ACR_PASSWORD> azstcr.azurecr.io/go-webapp:latest' | jq -r '.value[0].message | @text'

Enable succeeded:
[stdout]
Pushing to OCI Registry (azstcr.azurecr.io/go-webapp:latest)    elapsed: 0.0 s  total:   0.0 B  (0.0 B/s)
azstcr.azurecr.io/go-webapp:latest              pushing content
└──manifest (01500895b5ee)                      waiting         |--------------------------------------|
   ├──layer (ccab4a799a2a)                      waiting         |--------------------------------------|
   ├──config (53d6704353bd)                     waiting         |--------------------------------------|
   └──layer (18de6d7263b8)                      waiting         |--------------------------------------|
application/vnd.docker.distribution.manifest.v2+json sha256:01500895b5ee7c14d00002e59312c1c1eba50705c7904338aa8123e99f94e5b0
...
Pushing to OCI Registry (azstcr.azurecr.io/go-webapp:latest)    elapsed: 0.1 s  total:  2.0 Mi  (17.3 MiB/s)
azstcr.azurecr.io/go-webapp:latest              pushed content
└──manifest (01500895b5ee)                      already exists
   ├──layer (ccab4a799a2a)                      already exists
   ├──config (53d6704353bd)                     already exists
   └──layer (18de6d7263b8)                      already exists
application/vnd.docker.distribution.manifest.v2+json sha256:01500895b5ee7c14d00002e59312c1c1eba50705c7904338aa8123e99f94e5b0
Completed push to OCI Registry (azstcr.azurecr.io/go-webapp:latest)     elapsed: 0.1 s  total:  2.0 Mi  (17.3 MiB/s)

[stderr]
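As a final check, we can list the tags of the repository, again assuming the registry name azstcr:

❯ az acr repository show-tags --name azstcr --repository go-webapp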

Azure Container Registry with restored image

Mission accomplished. We restored the container image from an Azure Kubernetes Service node to an Azure Container Registry.

