Kubernetes Cluster Setup for Akash Providers

Overview

Akash leases are deployed as Kubernetes pods on provider clusters. This guide details the build of the provider's Kubernetes control plane and worker nodes.
Setting up the Kubernetes cluster is the responsibility of the provider. The steps below capture best practices and recommendations; they are not a comprehensive guide and assume pre-existing Kubernetes knowledge.

STEP 1 - Clone the Kubespray Project

Cluster Creation Recommendations
We recommend using the Kubespray project to deploy a cluster. Kubespray uses Ansible to make the deployment of a Kubernetes cluster easy.
The recommended minimum number of hosts is three. This is meant to allow:
  • One host to serve as Kubernetes master node & Akash provider
  • One host to serve as a redundant master node
  • One host to serve as Kubernetes worker node to host provider leases
In testing and development a single-host Kubernetes cluster can be used, but this configuration is not recommended for production.
Kubespray Clone
Install Kubespray on a machine that has connectivity to the three hosts that will serve as the Kubernetes cluster.
Obtain Kubespray and navigate into the created local directory:

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

STEP 2 - Install Ansible

When launched, Kubespray uses an Ansible playbook to deploy the Kubernetes cluster. In this step we install Ansible.
Depending on your operating system it may be necessary to install OS patches, pip3, and virtualenv. Example steps for an Ubuntu OS are detailed below.

sudo apt-get update
sudo apt-get install -y python3-pip
sudo apt install virtualenv
Within the kubespray directory, use the following commands to:
  • Open a Python virtual environment for the Ansible install
  • Install Ansible and the other necessary packages specified in the requirements.txt file

virtualenv --python=python3 venv
source venv/bin/activate
pip3 install -r requirements.txt
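The isolation a virtual environment provides can be sanity-checked with Python's built-in venv module, which serves the same purpose as the virtualenv command above (the /tmp path below is only an example):

```shell
# Create and activate a throwaway virtual environment; inside it,
# sys.prefix points at the venv rather than the system Python.
python3 -m venv /tmp/akash_demo_venv
. /tmp/akash_demo_venv/bin/activate
python3 -c 'import sys; print(sys.prefix != sys.base_prefix)'   # prints: True
deactivate
```

Packages installed while the environment is active (such as Ansible via requirements.txt) stay inside the venv directory and do not touch the system Python.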

STEP 3 - Ansible Access to Kubernetes Cluster

Ansible will configure the Kubernetes hosts via SSH. The user Ansible connects with must be root or have the capability of escalating privileges to root.
Commands in this step provide an example setup of SSH access to the Kubernetes hosts and testing of those connections.
Create SSH Keys on Ansible Host
ssh-keygen

  • Accept the defaults to create a public-private key pair
  • The keys will be stored in the user's home directory (for an RSA key, ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub)
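Key generation can also be done non-interactively; the output path and empty passphrase below are illustrative only (for real hosts, protect the private key appropriately):

```shell
# -f sets the output file, -N "" an empty passphrase, -q silences output.
ssh-keygen -t rsa -b 2048 -f /tmp/akash_demo_key -N "" -q
# Two files result: the private key and its .pub counterpart.
ls /tmp/akash_demo_key /tmp/akash_demo_key.pub
```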
Copy Public Key to the Kubernetes Hosts
ssh-copy-id -i ~/.ssh/id_rsa <username>@<ip-address>

Confirm SSH to the Kubernetes Hosts
  • Ansible should be able to access the Kubernetes hosts without a password prompt

ssh -i ~/.ssh/id_rsa <username>@<ip-address>

STEP 4 - Ansible Inventory

Ansible will use an inventory file to determine the hosts on which Kubernetes should be installed.
Inventory File
  • Use the following commands on the Ansible host and in the “kubespray” directory
  • Replace the IP addresses in the declare command with the addresses of your Kubernetes hosts
  • Running these commands will create a hosts.yaml file within the kubespray/inventory/akash directory
cp -rfp inventory/sample inventory/akash
declare -a IPS=(10.0.10.27 10.0.10.113 10.0.10.132)
CONFIG_FILE=inventory/akash/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
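The declare line builds a bash array whose elements the inventory builder receives as positional arguments; the expansion can be checked on its own:

```shell
# ${IPS[@]} expands to every element; ${#IPS[@]} is the element count.
declare -a IPS=(10.0.10.27 10.0.10.113 10.0.10.132)
echo "${IPS[@]}"     # prints: 10.0.10.27 10.0.10.113 10.0.10.132
echo "${#IPS[@]}"    # prints: 3
```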
Expected result: a hosts.yaml file is generated in the kubespray/inventory/akash directory.
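An illustrative sketch of the generated hosts.yaml (node names, IPs, and group membership will reflect your environment and Kubespray version; this is an outline, not verbatim output):

```yaml
all:
  hosts:
    node1:
      ansible_host: 10.0.10.27
      ip: 10.0.10.27
      access_ip: 10.0.10.27
    # node2 and node3 follow the same pattern
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```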

STEP 5 - Enable gVisor

In this section we will enable gVisor, which provides basic container security.
  • From the “kubespray” directory:
cd inventory/akash/group_vars/k8s_cluster
  • Using vi or nano, edit the k8s-cluster.yml file:

vi k8s-cluster.yml

  • Update the container_manager key (gVisor requires the containerd runtime):

container_manager: containerd
  • From the "kubespray" directory:

cd inventory/akash/group_vars

  • Using vi or nano, edit the etcd.yml file:

vi etcd.yml

  • Update the etcd_deployment_type key:

etcd_deployment_type: host
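Taken together, the two edits in this step amount to the following settings (paths relative to kubespray/inventory/akash/group_vars):

```yaml
# k8s_cluster/k8s-cluster.yml
container_manager: containerd

# etcd.yml
etcd_deployment_type: host
```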

STEP 6 - Create Kubernetes Cluster

With inventory in place we are ready to build the Kubernetes cluster via Ansible.
  • Note - the cluster creation may take several minutes to complete
  • From the “kubespray” directory:
ansible-playbook -i inventory/akash/hosts.yaml -b -v --private-key=~/.ssh/id_rsa cluster.yml

STEP 7 - Confirm Kubernetes Cluster

A couple of quick Kubernetes cluster checks are in order before moving on to the next steps.
  • SSH into Kubernetes node01 (AKA Kubernetes master node)
Confirm Kubernetes Nodes
kubectl get nodes
  • Example output from a healthy Kubernetes cluster
Confirm Kubernetes Pods
kubectl get pods -n kube-system
  • Example output of the pods that are the brains of the cluster

STEP 8 - Custom Resource Definition

Akash uses two Kubernetes Custom Resource Definitions (CRDs) to store each deployment.
  • On the Kubernetes master node, download and install the Akash CRD files.
Download the First CRD File
wget https://raw.githubusercontent.com/ovrclk/akash/master/pkg/apis/akash.network/v1/crd.yaml
Install the First CRD File
kubectl apply -f ./crd.yaml
Download the Second CRD File
wget https://raw.githubusercontent.com/ovrclk/akash/mainnet/main/pkg/apis/akash.network/v1/provider_hosts_crd.yaml
Install the Second CRD File
kubectl apply -f ./provider_hosts_crd.yaml
Confirm the CRD installs
kubectl get crd
Expected CRD Output
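The two CRD names below come from the files installed above; timestamps are placeholders:

```
NAME                          CREATED AT
manifests.akash.network       <timestamp>
providerhosts.akash.network   <timestamp>
```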

STEP 9 - Network Policy

Network Configuration
A default-deny network policy for the default namespace must be applied to the Kubernetes cluster.
  • On the Kubernetes master node, download the network YAML file
wget https://raw.githubusercontent.com/ovrclk/akash/master/_docs/kustomize/networking/network-policy-default-ns-deny.yaml
  • Install the YAML File
kubectl apply -f ./network-policy-default-ns-deny.yaml

STEP 10 - Ingress Controller

The Akash provider requires an ingress controller in the Kubernetes cluster.
Ingress Controller Install
  • On the Kubernetes master node, download the ingress controller YAML file
wget https://raw.githubusercontent.com/ovrclk/akash/master/_run/ingress-nginx.yaml
  • Install the YAML File
kubectl apply -f ./ingress-nginx.yaml
Ingress Controller Configuration
A Kubernetes node needs to be labeled for ingress use. This will cause the NGINX ingress controller to live only on the labeled node.
NOTE - if a wildcard domain is created, its DNS records should point to the labeled node's IP address. Additional nodes can be labeled to load balance ingress communications.
kubectl label nodes node3 akash.network/role=ingress