Akash Guidebook

Build a Cloud Provider

Prerequisites of an Akash Provider

Wallet Funding - Minimum of 5 AKT

Placing a bid on an order requires a 5 AKT deposit. The deposit is fully refunded after the bid is won or lost.
The steps to create an Akash account are covered in the Provider setup section of this document.

Kubernetes Cluster

  • A full Kubernetes cluster is required.
  • The cluster must have outbound internet access and be reachable from the internet.
  • Please use this guide for ALL Kubernetes-related configuration. It covers a full cluster build, should one be needed, as well as important details, for both new and pre-existing clusters, on the custom resource definitions and ingress controllers the Akash provider requires.

Custom Kubernetes Cluster Settings

Akash Providers are deployed in many environments, and we will add to these sections as nuances are discovered.

Quickstart Guides

Create a Kubernetes cluster and start your first provider
Already have a Kubernetes cluster? Start here!

Akash Provider Setup

Provider Setup Overview

The following sections will explore each step of the Akash provider setup in detail.

STEP1 - Select a Host to Run the Akash Provider

The Akash provider can be installed on any Kubernetes master or worker node, or, if preferred, on a separate host outside of the Kubernetes cluster.
  • NOTE - if the provider is installed on a Kubernetes host, ensure that it does not reside on the same host as the ingress controller, as this could cause TCP port 8443 conflicts.
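Where port availability is in doubt, a quick probe shows whether anything on the candidate host already holds TCP 8443 (a bash sketch using the /dev/tcp pseudo-device; ss or netstat would work equally well):

```shell
# Try a TCP connect to 127.0.0.1:8443; a failed connect means no listener.
if (exec 3<>/dev/tcp/127.0.0.1/8443) 2>/dev/null; then
  echo "port 8443 is in use"
else
  echo "port 8443 is free"
fi
```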

STEP2 - Install Akash Software

The Akash software install process on a Linux server is shown in this step.
Specify the Akash Version
  • These commands retrieve the latest stable version of the Akash software, store the version in a local variable, and install that version.
AKASH_VERSION="$(curl -s "https://raw.githubusercontent.com/ovrclk/net/master/mainnet/version.txt")"
curl https://raw.githubusercontent.com/ovrclk/akash/master/godownloader.sh | sh -s -- "v$AKASH_VERSION"
Add Akash Install Location to User’s Path
Add the software’s install location to the user’s path for easy use of Akash commands.
NOTE - below we provide the steps to add the Akash install directory to a user’s path on a Linux Ubuntu server. If you use a different operating system, consult a guide for adding a directory to a user’s path on that OS.
Open the user’s path file in an editor:
vi /etc/environment
View within text editor prior to the update:
Add the following directory, which is the Akash install location (the godownloader script installs to ./bin, i.e. /root/bin when run as root), to PATH:
View within the text editor following the update:
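As a non-interactive alternative to editing the file by hand (assuming the godownloader default of installing to ./bin, i.e. /root/bin when run as root), the PATH line can be rewritten with sed; a sketch, demonstrated here against a scratch copy rather than the live /etc/environment:

```shell
# Append a directory to the PATH entry of an environment-style file.
append_to_path() {
  file="$1"; dir="$2"
  sed -i "s|^PATH=\"\(.*\)\"|PATH=\"\1:${dir}\"|" "$file"
}

# Demonstrate against a scratch copy of a typical /etc/environment.
printf 'PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"\n' > /tmp/environment.demo
append_to_path /tmp/environment.demo /root/bin
cat /tmp/environment.demo
```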
Make the new path active in the current session:
source /etc/environment
Display the version of Akash software installed. This confirms the software installed and that the new user path addition worked.
akash version
Expected result:
[email protected]:~# akash version

STEP3 - Create/Import Akash Account

For a provider to bid on leases, an account funded with slightly more than the 5 AKT minimum is needed (the surplus covers transaction fees). An account can be created using the commands in this section, or an existing account can be imported for provider use.
Specify a key with your choice of name:
AKASH_KEY_NAME=providerkey
Specify the location of the keyring on the provider:
AKASH_KEYRING_BACKEND=file
Create the new account and store the encrypted private key in the keyring
  • Enter a passphrase of your choice when prompted
akash --keyring-backend "$AKASH_KEYRING_BACKEND" keys add "$AKASH_KEY_NAME"
Expected results:
[email protected]:~# AKASH_KEY_NAME=providerkey
[email protected]:~# AKASH_KEYRING_BACKEND=file
[email protected]:~# akash --keyring-backend "$AKASH_KEYRING_BACKEND" keys add "$AKASH_KEY_NAME"
Enter keyring passphrase:
Re-enter keyring passphrase:
- name: providerkey
  type: local
  address: akash16hxyzpwgp9elpl52yvll9gczr3vyanfgmdvh4x
  pubkey: akashpub1addwnpepqwz556cp568gk6tj9yxmqshmqad6pj0nnfnqzfufez2fd2jh94fr6y763nc
  mnemonic: ""
  threshold: 0
  pubkeys: []
**Important** write this mnemonic phrase in a safe place.
It is the only way to recover your account if you ever forget your password.
escape dry gate <redacted> prosper human

STEP4 - Verify Account Balance

We should verify the minimum account balance for the provider now that the account has been set up. As mentioned, the account needs slightly more than 5 AKT at a minimum.
Specify the Akash network to query (in this case the mainnet):
export AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
Query the network for an available node to communicate with:
export AKASH_NODE="$(curl -s "$AKASH_NET/rpc-nodes.txt" | shuf -n 1)"
Store the address of the account created in the previous step. Replace the placeholder with your own account address (i.e. an address such as akash1wpfyf47tzu70q3vu893mghz657gk2kgkuaj5zq):
export AKASH_ACCOUNT_ADDRESS=<your-account-address>
Get your account balance:
akash --node "$AKASH_NODE" query bank balances "$AKASH_ACCOUNT_ADDRESS"
Example output:
[email protected]:~# akash --node "$AKASH_NODE" query bank balances "$AKASH_ACCOUNT_ADDRESS"
balances:
- amount: "15000000"
  denom: uakt
pagination:
  next_key: null
  total: "0"
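Balances are reported in uakt, where 1 AKT = 1,000,000 uakt (consistent with the 5000000uakt bid deposit used later at provider start). A quick arithmetic sanity check against the 5 AKT minimum, using the balance from the example output:

```shell
# Check that the account balance (in uakt) clears the 5 AKT bid deposit.
BALANCE_UAKT=15000000   # amount taken from the example output above
MIN_UAKT=5000000        # 5 AKT expressed in uakt
if [ "$BALANCE_UAKT" -ge "$MIN_UAKT" ]; then
  echo "OK: $((BALANCE_UAKT / 1000000)) AKT available"
else
  echo "Insufficient funds: top up the account before bidding"
fi
```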

STEP5 - Create the Provider

Use the host that the Akash software was installed on for this section.
Deployment Domain
  • Create the environment variable of PROVIDER_AKASH_DOMAIN
  • This domain is used whenever a lease owner needs to speak directly with the provider to send a manifest or get a lease status
export PROVIDER_AKASH_DOMAIN=<provider-host-domain-name>
Create provider.yaml File
  • Create a file with the name of provider.yaml and add the contents below
  • NOTE - Please replace the <PROVIDER_AKASH_DOMAIN> placeholder with your Akash domain (i.e. provider.<yourdomain>.com)
  • Attributes - a thorough discussion of provider attributes can be found here.
host: https://<PROVIDER_AKASH_DOMAIN>:8443
attributes:
  - key: host
    value: <nameOfYourOrganization>
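The file can also be written non-interactively with a heredoc; a sketch with illustrative placeholder values (replace the domain and organization name with your own):

```shell
# Write provider.yaml; the domain and organization values are illustrative.
cat > provider.yaml << 'YAML'
host: https://provider.example.com:8443
attributes:
  - key: host
    value: exampleorg
YAML
cat provider.yaml
```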
Create the Akash Provider
  • Register the provider on the Akash Network
  • Several environment variables are set for the provider create command to use
  • Replace the AKASH_PROVIDER_KEY value with the name of the key created earlier (i.e. providerkey in the example)
  • Replace the AKASH_HOME value with the location of the keyring (i.e. /root/.akash in the example)
export AKASH_PROVIDER_KEY=providerkey
export AKASH_HOME=/root/.akash
export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
export AKASH_GAS_PRICES=0.025uakt
export AKASH_GAS=auto
akash tx provider create provider.yaml --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --keyring-backend=$AKASH_KEYRING_BACKEND --node=$AKASH_NODE --chain-id=$AKASH_CHAIN_ID
Example of Creating the Provider
[email protected]:~# export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
[email protected]:~# akash tx provider create provider.yaml --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --keyring-backend=$AKASH_KEYRING_BACKEND --node=$AKASH_NODE --chain-id=$AKASH_CHAIN_ID
Enter keyring passphrase:
confirm transaction before signing and broadcasting [y/N]: y

STEP6 - Create a TLS Certificate

Generate Server Certificate

  • Note: if the command fails with Error: certificate error: cannot overwrite certificate, add --overwrite to replace the existing certificate. Normally you can ignore that error and proceed with publishing the certificate (next step).
akash tx cert generate server $PROVIDER_AKASH_DOMAIN --chain-id $AKASH_CHAIN_ID --keyring-backend $AKASH_KEYRING_BACKEND --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --node=$AKASH_NODE --gas-prices="0.025uakt" --gas="auto" --gas-adjustment=1.15

Publish Certificate

akash tx cert publish server --chain-id $AKASH_CHAIN_ID --keyring-backend $AKASH_KEYRING_BACKEND --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --node=$AKASH_NODE --gas-prices="0.025uakt" --gas="auto" --gas-adjustment=1.15

STEP7 - Configure Kubectl

If the provider runs on a host that is not a Kubernetes master node, kubectl and the kubeconfig file might not be present. In this step we will create the kubeconfig file on the provider host, which is necessary when we start the provider.
Verify Kubeconfig File
  • On the provider host, verify if the kubeconfig file is present
  • We are looking for the presence of the .kube directory within the user’s home directory
cd ~
ls -al
Example output of directory contents
  • In this example the .kube directory does not exist and we will need to create it
  • If the directory does exist and you are able to conduct kubectl commands (I.e. “kubectl get nodes”), feel free to skip forward to STEP8
total 56
drwx------ 8 root root 4096 Nov 8 21:23 .
drwxr-xr-x 19 root root 4096 Nov 1 14:53 ..
drwx------ 5 root root 4096 Nov 8 21:06 .akash
drwx------ 3 root root 4096 Nov 2 18:49 .ansible
-rw------- 1 root root 74 Nov 2 16:38 .bash_history
-rw-r--r-- 1 root root 3106 Dec 5 2019 .bashrc
drwx------ 2 root root 4096 Nov 2 16:38 .cache
-rw-r--r-- 1 root root 161 Dec 5 2019 .profile
drwx------ 2 root root 4096 Nov 1 14:53 .ssh
-rw------- 1 root root 7349 Nov 8 21:03 .viminfo
drwxr-xr-x 2 root root 4096 Nov 8 20:38 bin
-rw-r--r-- 1 root root 118 Nov 8 21:03 provider.yaml
drwxr-xr-x 4 root root 4096 Nov 1 14:53 snap
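The same presence check can be scripted (a sketch; it only tests for the directory, not for a working cluster connection):

```shell
# Report whether the kubeconfig directory already exists under $HOME.
if [ -d "$HOME/.kube" ]; then
  echo ".kube directory present"
else
  echo ".kube directory missing - create it and copy the config over"
fi
```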
Create a .kube Directory
mkdir .kube
Copy Kubeconfig to the Provider
  • We will use the following command to copy the config file from the Kubernetes master to the provider host
  • Replace the username and IP address parts of the command
scp <username>@<ipaddress>:/root/.kube/config /root/.kube/config
Install Kubectl on the Provider
stable=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -LO https://storage.googleapis.com/kubernetes-release/release/${stable}/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Verify Kubectl
  • Following the copy of the kubeconfig file and the kubectl install, you should be able to execute commands like “kubectl get nodes” as shown in example below
[email protected]:~# kubectl get nodes
NAME    STATUS   ROLES                  AGE    VERSION
node1   Ready    control-plane,master   6d2h   v1.22.3
node2   Ready    control-plane,master   6d2h   v1.22.3
node3   Ready    <none>                 6d2h   v1.22.3

STEP8 - Start the Provider

In our final step the provider is started. We will run the provider process as a service to ensure it remains active across server reboots.
Domain Name Notes
  • The PROVIDER_AKASH_DOMAIN variable is used as the value for --cluster-public-hostname during provider start-up and is the publicly accessible hostname of the Kubernetes cluster.
  • If multiple master nodes exist in the Kubernetes cluster, either the DNS record should point to the IP addresses of all master nodes or an alternative load balancing strategy should be used.
Store Keyring Passphrase
mkdir -p /root/akash
echo "mypassword" | tee /root/akash/key-pass.txt
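Because this file stores the keyring passphrase in plain text, restricting it to the owning user is a sensible hardening step (an addition beyond the bare setup; the path matches the command above):

```shell
# Lock the passphrase file down so only the owner can read or write it.
KEY_PASS_FILE=/root/akash/key-pass.txt
chmod 600 "$KEY_PASS_FILE"
stat -c '%a' "$KEY_PASS_FILE"   # prints 600 on GNU stat
```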
Create Script
cat > /root/akash/start-provider.sh << 'EOF'
#!/usr/bin/env bash
export AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
export AKASH_NODE="$(curl -s "$AKASH_NET/rpc-nodes.txt" | shuf -n 1)"
export AKASH_HOME=/root/.akash
export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
export KUBECONFIG=/root/.kube/config
# Replace the two placeholder values below with your own domains
export PROVIDER_AKASH_DOMAIN=<provider-host-domain-name>
export PROVIDER_INGRESS_DOMAIN=<deployment-ingress-domain-name>
cd /root/akash
( sleep 2s; cat key-pass.txt; cat key-pass.txt ) | \
/root/bin/akash provider run \
--home $AKASH_HOME \
--chain-id $AKASH_CHAIN_ID \
--node $AKASH_NODE \
--keyring-backend=file \
--fees 1000uakt \
--kubeconfig $KUBECONFIG \
--cluster-k8s true \
--deployment-ingress-domain $PROVIDER_INGRESS_DOMAIN \
--deployment-ingress-static-hosts true \
--bid-price-strategy scale \
--bid-price-cpu-scale 0.001 \
--bid-price-memory-scale 0.001 \
--bid-price-storage-scale 0.00001 \
--bid-price-endpoint-scale 0 \
--bid-deposit 5000000uakt \
--cluster-node-port-quantity 1000 \
--cluster-public-hostname $PROVIDER_AKASH_DOMAIN
EOF
Make Script Executable
chmod +x /root/akash/start-provider.sh
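After making the script executable, a quick non-executing parse catches heredoc or quoting mistakes before the script is wired into systemd:

```shell
# bash -n parses the script without running it; it prints nothing on success.
bash -n /root/akash/start-provider.sh && echo "syntax OK"
```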
Create Service
cat > /etc/systemd/system/akash-provider.service << 'EOF'
[Unit]
Description=Akash Provider

[Service]
User=root
# Restart the provider automatically if it stops
Restart=always
RestartSec=10
ExecStart=/root/akash/start-provider.sh

[Install]
WantedBy=multi-user.target
EOF
Start and Persist the Provider Service
systemctl daemon-reload
systemctl start akash-provider
systemctl enable akash-provider
Confirm the Provider Status
journalctl -u akash-provider --since '5 min ago' -f

STEP9 - Create the Hostname Operator Service

cat > /etc/systemd/system/akash-hostname-operator.service << 'EOF'
[Unit]
Description=Akash Hostname Operator

[Service]
Environment=KUBECONFIG=/root/.kube/config
ExecStart=/root/bin/akash provider hostname-operator
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

STEP10 - Start the Hostname Operator Service

systemctl daemon-reload
systemctl start akash-hostname-operator
systemctl enable akash-hostname-operator