
Private Registries

This guide discusses how to use kind with image registries that require authentication.

There are multiple ways to do this, which we try to cover here.

Use ImagePullSecrets 🔗︎

Kubernetes supports configuring pods to use imagePullSecrets for pulling images. If possible, this is the preferable and most portable route.

See the upstream kubernetes docs for this; kind does not require any special handling to use it.

If you already have the config file locally but would still like to use secrets, read through kubernetes’ docs for creating a secret from a file.
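As a sketch of the file-based flow (the secret name `regcred` and the config path are illustrative assumptions, not anything kind requires):

```shell
# Create a dockerconfigjson secret from an existing docker config file.
# "regcred" and the path below are hypothetical examples; adjust to taste.
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson="$HOME/.docker/config.json" \
  --type=kubernetes.io/dockerconfigjson
```

Pods then reference the secret by name under `spec.imagePullSecrets`.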

Pull to the Host and Side-Load 🔗︎

kind can load an image from the host with the kind load ... commands. If you configure your host with credentials to pull the desired image(s) and then load them to the nodes you can avoid needing to authenticate on the nodes.
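For example (the image name is a placeholder; any image your host is authorized to pull works the same way):

```shell
# Pull on the host, using the host's registry credentials ...
docker pull my.registry.example/app:1.0
# ... then copy the image into the nodes of the cluster named "kind"
kind load docker-image my.registry.example/app:1.0 --name kind
```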

Add Credentials to the Nodes 🔗︎

Generally the upstream docs for using a private registry apply; with kind there are two options for this.

Mount a Config File to Each Node 🔗︎

If you pre-create a docker config.json containing credential(s) on the host you can mount it to each kind node.

Assuming your file is at /path/to/my/secret.json, the kind config would be:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /var/lib/kubelet/config.json
    hostPath: /path/to/my/secret.json
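Assuming the config above is saved as kind-config.yaml (the file name is arbitrary), create the cluster with it:

```shell
kind create cluster --config kind-config.yaml
```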

Use an Access Token 🔗︎

A credential can be programmatically added to the nodes at runtime.

If you do this then kubelet must be restarted on each node to pick up the new credentials.

An example shell snippet for generating a cred file on your host machine using Access Tokens:

#!/bin/sh
set -o errexit

# desired cluster name; default is "kind"
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:-kind}"

# create a temp file for the docker config
echo "Creating temporary docker client config directory ..."
DOCKER_CONFIG=$(mktemp -d)
export DOCKER_CONFIG
trap 'echo "Removing ${DOCKER_CONFIG}/*" && rm -rf ${DOCKER_CONFIG:?}' EXIT

echo "Creating a temporary config.json"
# This is to force the omission of credsStore, which is automatically
# created on supported systems. With credsStore missing, "docker login"
# will store the password in the config.json file.
cat <<EOF >"${DOCKER_CONFIG}/config.json"
{
  "auths": { "gcr.io": {} }
}
EOF

# login to gcr in DOCKER_CONFIG using an access token
echo "Logging in to GCR in temporary docker client config directory ..."
gcloud auth print-access-token | \
  docker login -u oauth2accesstoken --password-stdin https://gcr.io

# setup credentials on each node
echo "Moving credentials to kind cluster name='${KIND_CLUSTER_NAME}' nodes ..."
for node in $(kind get nodes --name "${KIND_CLUSTER_NAME}"); do
  # the -oname format is kind/name (so node/name) we just want name
  node_name=${node#node/}
  # copy the config to where kubelet will look
  docker cp "${DOCKER_CONFIG}/config.json" "${node_name}:/var/lib/kubelet/config.json"
  # restart kubelet to pick up the config
  docker exec "${node_name}" systemctl restart kubelet.service
done

echo "Done!"
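The `${node#node/}` expansion used in the loop above strips a leading `node/` prefix and leaves plain names untouched; a quick check in isolation:

```shell
node="node/kind-control-plane"
# "#node/" removes the shortest leading match of "node/"
echo "${node#node/}"    # prints "kind-control-plane"

plain="kind-worker"
echo "${plain#node/}"   # prints "kind-worker" (no prefix to strip)
```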

Use a Service Account 🔗︎

Access tokens are short lived, so you may prefer to use a Service Account and keyfile instead. First, either download the key from the console or generate one with gcloud:

gcloud iam service-accounts keys create <output.json> --iam-account <account email>

Then, replace the gcloud auth print-access-token | ... line from the access token snippet with:

cat <output.json> | docker login -u _json_key --password-stdin https://gcr.io

See Google's upstream docs on key file authentication for more details.

Use a Certificate 🔗︎

If your registry authenticates with certificates, and both the certificates and keys reside in a folder on your host, you can mount them into the nodes and patch containerd's default configuration to use them, as in the example:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # This option mounts the host docker registry folder into
    # the control-plane node, allowing containerd to access them.
    extraMounts:
      - containerPath: /etc/docker/certs.d/
        hostPath: /etc/docker/certs.d/
containerdConfigPatches:
  - |-
    # The registry hostname and file names below are illustrative
    # examples; replace them to match your registry and certificates.
    [plugins."io.containerd.grpc.v1.cri".registry.configs."my.registry.example".tls]
      cert_file = "/etc/docker/certs.d/my.registry.example/client.cert"
      key_file  = "/etc/docker/certs.d/my.registry.example/client.key"
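containerd resolves these paths inside the node, so the mounted folder must contain the certificate and key files that the patch names. A hypothetical host layout (the hostname and file names are examples, not required by kind):

```
/etc/docker/certs.d/
└── my.registry.example/
    ├── ca.crt        # registry CA certificate
    ├── client.cert   # client certificate referenced by cert_file
    └── client.key    # client key referenced by key_file
```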