This tutorial describes how to install, configure, and run the Kubernetes container orchestration system on Clear Linux* OS using different container engines and runtimes.

Kubernetes* is an open source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

Both runc and the Kata Containers* kata-runtime adhere to the OCI runtime specification and work seamlessly with Kubernetes. Kata Containers provides strong isolation, which makes it well suited to untrusted workloads and multi-tenant scenarios. The runtime can be selected on a per-pod basis, so you can mix and match runc and kata-runtime on the same host to suit your needs.

This tutorial describes the following combinations:

  • Kubernetes with Docker and runc
  • Kubernetes with CRI-O and kata-runtime


This tutorial assumes you have installed Clear Linux OS and updated to the latest release on your host system. You can learn about the benefits of having an up-to-date system for cloud orchestration on the Software update page. For detailed instructions on installing Clear Linux OS on a bare metal system, follow the bare metal installation tutorial.

Before you install any new packages, update Clear Linux OS with the following command:

sudo swupd update

Install Kubernetes and CRI runtimes

Kubernetes and a set of supported CRI runtimes are included in the cloud-native-basic bundle. To install the framework, enter the following command:

sudo swupd bundle-add cloud-native-basic

Configure Kubernetes

This tutorial uses the basic default Kubernetes configuration for simplicity. You must define your Kubernetes configuration according to your specific deployment and your security needs.

  1. Enable IP forwarding to avoid kubeadm preflight check errors:

    Create (or edit if it exists) the file /etc/sysctl.d/60-k8s.conf and include the following line:

    net.ipv4.ip_forward = 1

    Apply the change:

    sudo systemctl restart systemd-sysctl
  2. Enable the kubelet service:

    sudo systemctl enable kubelet.service
  3. Disable swap using one of the following methods:

    1. Temporarily:

      sudo swapoff -a

      Note: Swap is re-enabled at the next reboot, which will cause failures in your cluster.

    2. Permanently:

      Mask the swap partition:

      sudo systemctl mask $(sed -n -e 's#^/dev/\([0-9a-z]*\).*#dev-\1.swap#p' /proc/swaps) 2>/dev/null
      sudo swapoff -a

      Note: On systems with limited resources, some performance degradation may be observed while swap is disabled.

  4. Switch to root so you can modify the /etc/hosts file:

    sudo -s
  5. Add an entry to the /etc/hosts file (creating the file if it does not exist) that Kubernetes will read to locate the master host:

    echo "127.0.0.1 localhost `hostname`" >> /etc/hosts
  6. Exit root:

    exit
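The sed expression in the swap step above derives a systemd swap unit name from each active swap device listed in /proc/swaps. A quick way to see what it produces, using a sample /proc/swaps body (the device name here is hypothetical):

```shell
# Sample of what /proc/swaps typically contains: a header line plus one
# row per active swap device. The device path is a made-up example.
sample='Filename    Type       Size    Used  Priority
/dev/sda2   partition  8388604  0     -2'

# Same expression as in the tutorial: turn "/dev/sda2 ..." into
# "dev-sda2.swap", the unit name that "systemctl mask" expects.
printf '%s\n' "$sample" | sed -n -e 's#^/dev/\([0-9a-z]*\).*#dev-\1.swap#p'
# Prints: dev-sda2.swap
```

The header line does not start with /dev, so the -n/p combination prints only the transformed device rows.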

Configure and run Kubernetes

This section describes how to configure and run Kubernetes with:

  • Docker and runc
  • CRI-O and kata-runtime

Configure and run Docker + runc

  1. Enable the Docker service:

    sudo systemctl enable docker.service
  2. Create (or edit if it exists) the file /etc/systemd/system/docker.service.d/51-runtime.conf and include the following line:

    Environment="DOCKER_DEFAULT_RUNTIME=--default-runtime runc"
  3. Create (or edit if it exists) the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and include the kubelet flags your deployment requires (the pod network add-on notes below show an example for Weave Net):

  4. Enter the commands:

    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo systemctl restart kubelet
  5. Initialize the master control plane with the command:

    sudo kubeadm init --ignore-preflight-errors=SystemVerification

Configure and run CRI-O + kata-runtime

  1. Enable the CRI-O service:

    sudo systemctl enable crio.service
  2. Enter the commands:

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    sudo systemctl restart kubelet
  3. Initialize the master control plane with the command:

    sudo kubeadm init --cri-socket=/run/crio/crio.sock

Install pod network add-on

You must choose and install a pod network add-on so that your pods can communicate. Check whether your add-on requires special flags when you initialize the master control plane.

Notes about flannel add-on

If you choose the flannel add-on, then you must add the following option to the kubeadm init command:

--pod-network-cidr=10.244.0.0/16

If you are using CRI-O and flannel and you want to use Kata Containers, edit the /etc/crio/crio.conf file to add:

manage_network_ns_lifecycle = true

Notes about Weave Net add-on

If you choose the Weave Net add-on, then you must make the following changes because it installs itself in the /opt/cni/bin directory.

If you are using Docker and Weave Net, edit the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file to add:

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
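Because the drop-in lives under /etc, writing it in place requires root. One low-risk approach is to stage the file in a scratch directory first and only then copy it into /etc/systemd/system/ with sudo (the staging path below is a stand-in; the Environment line is the one shown above):

```shell
# Stage the kubelet drop-in in a temporary directory; copying it into
# /etc/systemd/system/kubelet.service.d/ afterwards requires sudo.
stage=$(mktemp -d)
mkdir -p "$stage/kubelet.service.d"
cat > "$stage/kubelet.service.d/10-kubeadm.conf" <<'EOF'
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
EOF

# Review the staged file before installing it.
cat "$stage/kubelet.service.d/10-kubeadm.conf"
```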

If you are using CRI-O and Weave Net, you must complete the following steps.

  1. Edit the /etc/crio/crio.conf file to change plugin_dir from:

    plugin_dir = "/usr/libexec/cni/"

    to:

    plugin_dir = "/opt/cni/bin"
  2. Add the loopback CNI plugin to the plugin path with the command:

    sudo ln -s /usr/libexec/cni/loopback /opt/cni/bin/loopback
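As a sanity check, the same symlink shape can be rehearsed in a scratch directory before touching system paths (both directories below are stand-ins for /usr/libexec/cni and /opt/cni/bin):

```shell
src=$(mktemp -d)   # stand-in for /usr/libexec/cni
dst=$(mktemp -d)   # stand-in for /opt/cni/bin
touch "$src/loopback"                  # stand-in for the loopback plugin binary
ln -s "$src/loopback" "$dst/loopback"  # mirrors the sudo ln -s command above
readlink "$dst/loopback"               # prints the target path of the new link
```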

Use your cluster

Once your master control plane is successfully initialized, instructions for using your cluster, together with its IP, token, and hash values, are displayed. Record these cluster values: they are needed when joining worker nodes to the cluster, and some of them are valid only for a limited period. The values are presented in a format similar to:

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash <hash>


You’ve successfully installed and set up Kubernetes in Clear Linux OS using Docker and runc or CRI-O and kata-runtime. You are now ready to follow on-screen instructions to deploy a pod network to the cluster and join worker nodes with the displayed token and IP information.

Package configuration customization in Clear Linux OS (optional)

Clear Linux OS is a stateless system that looks for user-defined package configuration files in the /etc/<package-name> directory to be used as default. If user-defined files are not found, Clear Linux OS uses the distribution-provided configuration files for each package.

If you customize any of the default package configuration files, you must store the customized files in the /etc/ directory. If you edit any of the distribution-provided default files, your changes will be lost in the next system update.

For example, to customize CRI-O configuration in your system, run the following commands:

sudo mkdir /etc/crio
sudo cp /usr/share/defaults/crio/crio.conf /etc/crio/
sudo $EDITOR /etc/crio/crio.conf

Learn more about Stateless in Clear Linux OS and view the Clear Linux OS documentation.

Proxy configuration (optional)

If you use a proxy server, you must set your proxy environment variables and create an appropriate proxy configuration file for both CRI-O and Docker services. Consult your IT department if you are behind a corporate proxy for the appropriate values. Ensure that your local IP is explicitly included in the environment variable NO_PROXY. (Setting localhost is not enough.)
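One way to build a NO_PROXY value that includes the local IP is to collect the host's addresses with hostname -I, which prints space-separated IPs on most Linux systems. This is a sketch under that assumption; adjust it for your environment:

```shell
# Collect the host's IP addresses and join them with commas.
local_ips=$(hostname -I 2>/dev/null | tr ' ' ',')

# Include localhost, the loopback IP, and the host's own IPs, so that
# traffic to the local node bypasses the proxy.
export NO_PROXY="localhost,127.0.0.1,${local_ips%,}"
echo "$NO_PROXY"
```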

If you have already set your proxy environment variables, run the following commands as a shell script to configure all of these services in one step. The heredoc expands your current HTTP_PROXY, HTTPS_PROXY, and NO_PROXY values into each service's systemd drop-in file:

services=('crio' 'docker')
for s in "${services[@]}"; do
sudo mkdir -p "/etc/systemd/system/${s}.service.d/"
cat << EOF | sudo tee "/etc/systemd/system/${s}.service.d/proxy.conf"
[Service]
Environment="HTTP_PROXY=${HTTP_PROXY}"
Environment="HTTPS_PROXY=${HTTPS_PROXY}"
Environment="NO_PROXY=${NO_PROXY}"
EOF
done

Troubleshooting

  • <HOSTNAME> not found in <IP> message.

    Your DNS server may not be appropriately configured. Try adding an entry to the /etc/hosts file with your host’s IP and Name.

    For example: <host-ip> myhost

    Use the commands hostname and hostname -I to retrieve them.

  • Images cannot be pulled.

    You may be behind a proxy server. Try configuring your proxy settings, using the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY as required in your environment.

  • Connection refused error.

    If you are behind a proxy server, you may need to add the master’s IP to the environment variable NO_PROXY.

  • Connection timed-out or Access Refused errors.

    You must ensure that the appropriate proxy settings are available from the same terminal where you will initialize the control plane. To verify the proxy settings that Kubernetes will actually use, run the commands:

    echo $HTTP_PROXY
    echo $HTTPS_PROXY
    echo $NO_PROXY

    If the displayed proxy values are different from your assigned values, the cluster initialization will fail. Contact your IT support team to learn how to set the proxy variables permanently, and how to make them available for all the types of access that you will use, such as remote SSH access.

  • Missing environment variables.

    If you are behind a proxy server, pass environment variables by adding -E to the command that initializes the master control plane.

    # Kubernetes with Docker + runc
    sudo -E kubeadm init --ignore-preflight-errors=SystemVerification

    # Kubernetes with CRI-O + kata-runtime
    sudo -E kubeadm init --cri-socket=/run/crio/crio.sock
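The effect of -E can be illustrated without sudo itself: env -i mimics the environment scrubbing that sudo performs by default, while a plain subshell inherits the exported variable, which is what sudo -E relies on. The proxy value below is hypothetical:

```shell
# A hypothetical proxy value for demonstration only.
export HTTP_PROXY="http://proxy.example.com:3128"

# env -i starts the child with an empty environment, as sudo does by
# default, so the variable never reaches the child process.
env -i sh -c 'echo "scrubbed:  [${HTTP_PROXY}]"'
# Prints: scrubbed:  []

# A plain subshell inherits the exported variable, analogous to
# running kubeadm under "sudo -E".
sh -c 'echo "preserved: [${HTTP_PROXY}]"'
# Prints: preserved: [http://proxy.example.com:3128]
```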