Kontainer8

Kubeadm Vagrant Ansible Ubuntu

Ingress Local-path-provisioner

Table of Contents generated with DocToc

Creating a kubernetes cluster using Vagrant machines as nodes and Containerd as a container runtime

  • This role simply automates the steps of the aCloudGuru CKS lesson Building a Kubernetes Cluster

  • Parts of the roles are also borrowed from my previous projects for creating a K8s cluster

  • The Vagrant ansible_local provisioner is used to execute the roles on the target hosts

  • The Kubernetes version to be used can be modified by changing the fact inside the kontainerd role:

- set_fact:
    k8s_version: 1.24.0-00 # Change to the desired version

Vagrant Machines details

Machine | Address        | FQDN
master  | 192.168.100.11 | master master.com
worker  | 192.168.100.10 | worker worker.com

How to use

# Clone the repo
git clone https://github.com/theJaxon/Kontainer8.git

cd Kontainer8

# Start the machines 
vagrant up 

# SSH into any of the machines 
vagrant ssh <master|worker>

Locally building images

  • Start by installing podman
export image_name=jenkins-local

# Assuming there's a Dockerfile in the current working directory
podman build --tag $image_name .

# Save the image into a tar file
podman save $image_name -o $image_name.tar

# Use ctr to import the image 
sudo ctr -n=k8s.io images import $image_name.tar

# Verify that the image is now available for k8s to use 
sudo crictl images

> localhost/jenkins-local
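
To run the imported image, a Pod (or Deployment) can reference it by the name shown above and skip pulling from a registry. A minimal sketch (the pod and container names here are illustrative, not part of the role):

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-local
spec:
  containers:
    - name: jenkins
      image: localhost/jenkins-local
      imagePullPolicy: Never # never pull; use the image imported via ctr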

Accessing the Ingress from the Host OS

  • The Nginx Ingress controller is configured to use NodePort 30000 so that we end up with a fixed port number for the controller
  • The controller can be used as a proxy by calling either of the two machines
curl --proxy http://192.168.100.10:30000 http://jellyfin.media/web/index.html
curl --proxy http://192.168.100.11:30000 http://jellyfin.media/web/index.html
  • On the host OS you can take advantage of Firefox by setting its proxy to point to the ingress controller

(Screenshot: Firefox proxy settings)

  • You can take this one step further and install a plugin like Proxy Toggle to easily switch between the controller and your regular browser settings
  • Once the Ingress controller is set up as the proxy, you can reach the exposed services through the configured Jellyfin ingress (see the sketch below)
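
For reference, an Ingress routing the jellyfin.media host through the nginx controller might look roughly like the following sketch (the backing Service name and port are assumptions, not copied from the role):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
spec:
  ingressClassName: nginx
  rules:
    - host: jellyfin.media
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin # assumed Jellyfin Service name
                port:
                  number: 8096 # Jellyfin's default HTTP port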

Extras

  • The Extras role defines additional resources that will be deployed to the Kubernetes cluster; it is responsible for creating 2 new namespaces (sketched below):
    1. ingress-nginx - where the nginx ingress controller will be deployed
    2. local-path-storage - where the local path provisioner will be deployed (local-path is the default storage class)
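
In manifest form the two namespaces boil down to something like this (a sketch only; the role may create them through an Ansible module instead):

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx # nginx ingress controller lives here
---
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage # local path provisioner lives here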

  • One of the quirks I faced was with installing the Kubernetes packages (kubeadm, kubectl and kubelet). The problem was the install order: starting with kubelet pulled in kubectl as a dependency at the latest version rather than the version I was specifying, so when a later task tried to install the pinned (older) kubectl, apt treated it as a downgrade and Ansible errored out.
  • The workaround was to change the sequence and start by installing the desired kubectl version, as sketched below
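
A rough sketch of what that ordering looks like as Ansible tasks, reusing the k8s_version fact from above (module arguments are illustrative, not lifted from the role):

- name: Install kubectl first, pinned to the desired version
  apt:
    name: "kubectl={{ k8s_version }}"
    state: present

- name: Install kubelet and kubeadm at the same pinned version
  apt:
    name:
      - "kubelet={{ k8s_version }}"
      - "kubeadm={{ k8s_version }}"
    state: present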

Useful Resources
