Installing K3s with experimental SELinux support

K3s is a certified lightweight Kubernetes distribution suitable for resource-constrained environments, developed by Rancher and the open-source community. In this post, I am sharing my experience installing K3s on CentOS 8.0 with SELinux enabled.

When Red Hat Enterprise Linux 8.0 was released, it became official that Red Hat wants to move its container-running user base away from Docker. For developing and running simple autonomous containers (pods), there are Podman and Buildah, but I wanted an orchestration solution at least on par with Docker Swarm for one of my proof-of-concept lab machines, which has NAS-level specs: a low-power CPU with 4 cores, 4 GiB of RAM, and lots of storage.

Sure, you can use systemd to manage containers for a small-scale production deployment that doesn't change frequently, but that is not true for this system, and I assume many others are in the same shoes. There is OpenShift, along with a few other Kubernetes distributions that I really love, but most of them have too big a footprint for the environment in question, so I had to look for niche options. After doing a little research on lightweight Kubernetes distributions, I discovered K3s and gave it a whirl in my lab. Here is my initial experience.

Install Red Hat or CentOS Linux 8

I deployed a minimal installation of CentOS 8. You might want a different install flavor depending on your needs (desktop environment, developer tools, etc.), but from K3s' perspective, the minimal install is sufficient. To use the storage class backed by the Local Path Provisioner that K3s deploys out of the box, make sure that /var/lib/rancher/k3s/ has plenty of storage space: the persistent storage claimed by the pods will live there, along with most, if not all, of the Kubernetes engine's state.
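The directory itself won't exist before the install, so at this point it is enough to check the free space on the file system that will back it, for example:

# df -h /var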

Installing SELinux packages

K3s has experimental support for SELinux. I probably wouldn't deploy it like that in a production environment for now, but if you want to help with the development of K3s' SELinux support, or want the increased security at the risk of potential problems along the way, you can keep it turned on.

Since this is an experimental lab system for me, I decided to leave SELinux turned on, and I installed the required packages after updating the system:

# yum update
# yum install container-selinux selinux-policy-base
# curl -o k3s-selinux-0.1.1-rc1.el7.noarch.rpm https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
# yum localinstall k3s-selinux-0.1.1-rc1.el7.noarch.rpm
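
You can confirm that SELinux is enabled and enforcing:

# getenforce
Enforcing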

NOTE: You will get an informative error message if you try to install K3s first while SELinux is enabled and set to Enforcing:

[ERROR]  Failed to apply container_runtime_exec_t to /usr/local/bin/k3s, please install:
    yum install -y container-selinux selinux-policy-base
    rpm -i https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm

Hence, trying to install K3s first might be the best way to find out what the required course of action is, as the information above might become outdated.

Install K3s

Installing K3s is super easy: you just have to run the following command to deploy K3s on the given node as both a server (controller) and an agent (worker) of the Kubernetes cluster:

# curl -sfL https://get.k3s.io | sh -

For control over how K3s is installed and configured, you can use the environment variables provided for the server and agent functions, as sketched below.
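
For example, the install script honors INSTALL_K3S_EXEC for passing flags to the k3s binary, and K3S_URL together with K3S_TOKEN for joining an additional node to an existing server. A sketch, where the kubeconfig mode flag and the server hostname are just illustrations for my setup:

# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644" sh -
# curl -sfL https://get.k3s.io | K3S_URL=https://k3s-server.example.com:6443 K3S_TOKEN=<token> sh -

For the second command, <token> has to be read on the server node, from /var/lib/rancher/k3s/server/node-token.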

Enable bash completion for kubectl

Having the bash-completion package installed is a prerequisite for kubectl completion to work:

# yum install bash-completion

Then, execute the following command to have command line completion enabled for kubectl:

# echo -en '\nsource <(kubectl completion bash)\n' >> ~/.bashrc

For the change to take effect, log out of your shell, then log back in, or simply source your .bashrc script.
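
If you also like a short alias for kubectl, you can attach the completion to it as well; a minimal sketch, assuming the alias k (the pattern comes from the upstream kubectl documentation):

# echo 'alias k=kubectl' >> ~/.bashrc
# echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc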

An issue with SELinux and the Local Path Provisioner

Due to an issue with the SELinux policy provided with K3s, the Local Path Provisioner bundled with K3s is unable to bind Persistent Volume Claims, and containers are unable to write to these volumes: the file context set for the /var/lib/rancher/k3s/storage path, where all of these locally persisted volumes are stored, doesn't allow containers to create or write files or directories.

You can overcome this problem by adding the following rule to the local policy and relabeling everything under this path hierarchy before starting to use the local-path storage class:

# semanage fcontext -a -t container_file_t "/var/lib/rancher/k3s/storage(/.*)?"
# restorecon -R /var/lib/rancher/k3s/storage/
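
To verify that claims now bind and containers can write, you can apply a small test claim and a pod that writes to it. A minimal sketch, where the names test-pvc and test-pod are mine and local-path is the storage class K3s ships by default:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc

# kubectl apply -f test.yaml

Note that the claim typically stays Pending until the pod is scheduled, since the storage class binds on the first consumer.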

In the meantime, I have also submitted a pull request to rectify this problem in the K3s policy repo, so you might want to check whether it has been merged into the master branch; it may very well be that you no longer need this workaround by the time you read this.

Alternatively, you can switch SELinux to permissive mode until this gets resolved:

# setenforce 0
# sed -i.bak -E 's/^(SELINUX=).*$/\1permissive/' /etc/selinux/config

SELinux won't interfere with your cluster this way, but you can still monitor /var/log/audit/audit.log to see which operations would otherwise be blocked.
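
For a more readable view of recent denials than grepping the raw log, you can use the ausearch tool from the audit package:

# ausearch -m avc -ts recent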

Tuning the firewall

The most likely scenario for a Kubernetes cluster node is that firewalld will be disabled and rules will be set up using iptables (nftables), but if you want something that works out of the box with the default firewalld setup, you have to add the cni0 interface to the trusted zone:

# firewall-cmd --zone=trusted --add-interface=cni0 --permanent
# firewall-cmd --reload
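
You can confirm that the interface ended up in the zone:

# firewall-cmd --zone=trusted --list-interfaces
cni0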

The master node of your cluster is ready now

That's all; you should now be ready to work with your first cluster node:

# kubectl get nodes
NAME                           STATUS   ROLES    AGE    VERSION
k3s-server.example.com         Ready    master   4d7h   v1.18.3+k3s1

My brief experience using K3s with this setup

In the last few days, I have deployed various combinations of deployments, services, and ingresses, and I have encountered no issues. Deployments using Helm charts also worked without a hitch. So far, so good.
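
If you want a quick smoke test of your own, something along these lines should do; the deployment name nginx is just my choice for the example:

# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80
# kubectl get deployment,svc nginx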