Every few months we walk into a new engagement and find a Deployment that looks like this:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: api
          image: myorg/api:latest
          imagePullPolicy: Always
```
It works on Tuesday. On Saturday it doesn't, and nobody can explain why.
## What `:latest` actually means
It means nothing in particular. `:latest` is just a tag like any other: a mutable label that, by convention, points at whichever image was most recently pushed with that tag. Nothing enforces the convention, and Kubernetes treats the whole reference as an opaque string. The control plane has no idea that "latest today" and "latest yesterday" are different bytes on disk.
That breaks three things you rely on:
- **Rollback.** `kubectl rollout undo` rewinds the Deployment spec, but the spec still says `myorg/api:latest`. The "previous" image is whatever the registry calls latest right now, which is the broken one you just shipped.
- **Replica consistency.** When a node restarts and re-pulls, it gets today's latest. Your fleet now runs two versions of the same Deployment.
- **Audit trail.** Six months later, when you're trying to reconstruct what version was running during an incident, the Deployment YAML tells you nothing.
## Tags aren't enough. Pin digests.
Switching from `:latest` to `:v1.4.2` is better, but tags are still mutable in most registries. Anyone with push access can re-tag `v1.4.2` to point at different bytes. The only thing the registry guarantees to be immutable is the image digest:
```yaml
image: myorg/api@sha256:3f7e1b9c5d2a8f4e6b1d0c9a8f7e6d5c4b3a2918f7e6d5c4b3a291807f6e5d4c
```
That digest is content-addressed. It will pull the same bytes today, next year, and after the registry's storage backend has been migrated twice. `kubectl rollout undo` now actually rolls back to the previous image. Crashloops don't mysteriously fix themselves on restart.
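The pinned shape is also easy to check mechanically: a digest reference ends in `@sha256:` followed by 64 hex characters. A minimal sketch of such a check (`is_digest_pinned` is our hypothetical helper, not part of any tool):

```shell
# Hypothetical helper: succeeds only when an image reference is pinned
# by digest, i.e. ends in @sha256: followed by 64 hex characters.
is_digest_pinned() {
  printf '%s\n' "$1" | grep -qE '@sha256:[0-9a-f]{64}$'
}

# Tag references fail the check; digest references pass.
is_digest_pinned "myorg/api:latest" || echo "not pinned"
is_digest_pinned "myorg/api@sha256:3f7e1b9c5d2a8f4e6b1d0c9a8f7e6d5c4b3a2918f7e6d5c4b3a291807f6e5d4c" && echo "pinned"
```

Drop something like this into a pre-commit hook or CI step and tag-only references never make it to a cluster.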
## How to make this automatic
Nobody is going to copy SHA256 strings by hand. Make the pipeline do it.
If you're using GitHub Actions, the build step already returns the digest:
```yaml
- id: build
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: myorg/api:${{ github.sha }}

- name: Update manifest with digest
  run: |
    yq -i '.spec.template.spec.containers[0].image = "myorg/api@${{ steps.build.outputs.digest }}"' \
      k8s/deployment.yaml
```
Commit the updated manifest back to the repo. If you're on Argo CD or Flux, that commit is the deployment.
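To keep the repo honest after that commit, add a policy check to CI that fails whenever a manifest slips back to a tag-only image. A sketch, assuming manifests live under a single directory (`check_pins` is our name, not a standard tool):

```shell
# Hypothetical CI guard: fail when any `image:` line under the given
# directory is not pinned by digest. Adjust paths and patterns to taste.
check_pins() {
  dir="$1"
  # Collect image references, then keep only the ones lacking a digest.
  unpinned=$(grep -rhoE 'image:[[:space:]]*[^[:space:]]+' "$dir" 2>/dev/null \
    | grep -v '@sha256:' || true)
  if [ -n "$unpinned" ]; then
    printf 'unpinned images:\n%s\n' "$unpinned" >&2
    return 1
  fi
}
```

Run it as `check_pins k8s/` as a required CI step; a non-zero exit blocks the merge.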
Don't have GitOps yet? Kustomize can rewrite the image to a pinned digest when you render or apply:
```yaml
# kustomization.yaml
images:
  - name: myorg/api
    newTag: v1.4.2
    digest: sha256:3f7e1b9c5d2a8f4e6b1d0c9a8f7e6d5c4b3a2918f7e6d5c4b3a291807f6e5d4c
```
## Lock the registry too
Pinning digests in manifests doesn't stop someone re-tagging on the registry side. If you control the registry, turn on tag immutability:
- **ECR:** set `imageTagMutability: IMMUTABLE` on the repository.
- **Artifact Registry (GCP):** enable immutable image tags on the repo.
- **GHCR / Docker Hub:** no native immutability; enforce it in CI by refusing to push if the tag already exists.
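That CI-side refusal can be as small as the sketch below. `manifest_exists` is a stand-in: with Docker available it would typically be `docker manifest inspect "$1" >/dev/null 2>&1`, which succeeds only when the tag already resolves to a manifest.

```shell
# Hypothetical pre-push guard for registries that allow re-tagging:
# refuse to push when the tag already exists, so tags behave as
# immutable even though the registry doesn't enforce it.

# Stand-in so the sketch is self-contained; replace with a real
# registry lookup (e.g. docker manifest inspect) in CI.
manifest_exists() { false; }

refuse_overwrite() {
  if manifest_exists "$1"; then
    echo "refusing to overwrite existing tag: $1" >&2
    return 1
  fi
}
```

Call `refuse_overwrite "myorg/api:v1.4.2"` right before `docker push`; a non-zero exit fails the job.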
If `kubectl describe pod` shows a digest you can grep for in your CI logs, you can debug an outage in minutes. If it shows `:latest`, you can't.
## Migration order
1. Turn on tag immutability in the registry. New pushes start being safe.
2. Update CI to write digests into manifests. New deployments become reproducible.
3. Re-deploy existing services so their live spec contains a digest. Now rollback works.
Total effort: an afternoon for most teams. Total payoff: every future incident gets shorter.
Need help wiring this into your existing pipeline? We do this kind of clean-up for engineering teams in Greece and across Europe — say hi.