Build a Cloud Provider

Prerequisites of an Akash Provider

Wallet Funding - Minimum of 5 AKT

Placing a bid on an order requires a 5 AKT deposit. The deposit is fully refunded once the bid is won or lost.
The steps to create an Akash account are covered in the Provider setup section of this document.

Kubernetes Cluster

  • A full Kubernetes cluster is required.
  • The cluster must have outbound internet access and be reachable from the internet.
  • Please use the guide linked below for ALL Kubernetes-related configuration. It covers a full cluster build, should one be needed, as well as the custom resource definitions and ingress controller settings the Akash provider requires on new or pre-existing clusters. A quick verification sketch follows the link.
Kubernetes Cluster (Akash Guidebook)
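A quick sanity check, run from any host with kubectl access to the cluster, confirms that the nodes are ready and that an ingress controller is running (the ingress-nginx namespace below is an assumption; adjust it to match your installation):
# Confirm all cluster nodes report a Ready status
kubectl get nodes

# Confirm an ingress controller is deployed (namespace is an assumption; adjust as needed)
kubectl get pods -n ingress-nginx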

Akash Provider Setup

Provider Setup Overview

The following sections will explore each step of the Akash provider setup in detail.

STEP1 - Select a Host to Run the Akash Provider

The Akash provider can be installed on any Kubernetes master or worker node, or, if preferred, on a separate host outside of the Kubernetes cluster.

STEP2 - Install Akash Software

This step covers installing the Akash software on a Linux server.
Specify the Akash Version
  • These commands retrieve the latest stable version of the Akash software, store it in a local variable, and install that version.
AKASH_VERSION="$(curl -s "https://raw.githubusercontent.com/ovrclk/net/master/mainnet/version.txt")"

curl https://raw.githubusercontent.com/ovrclk/akash/master/godownloader.sh | sh -s -- "v$AKASH_VERSION"
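As a quick check, confirm the retrieved version and the binary location. The path below assumes godownloader's usual behavior of installing into a ./bin directory beneath the current working directory (hence /root/bin when run as root from the home directory):
# Show the version string that was fetched from the mainnet version file
echo "$AKASH_VERSION"

# The install script typically places the binary in ./bin of the current directory
ls -l ./bin/akash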
Add Akash Install Location to User’s Path
Add the software’s install location to the user’s path for easy use of Akash commands.
NOTE - the steps below add the Akash install directory to a user's path on an Ubuntu Linux server. If you run a different operating system, consult a guide for adding a directory to a user's path on that system.
Open the user’s path file in an editor:
vi /etc/environment
View within text editor prior to the update:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Add the following directory, which is the Akash install location, to PATH:
/root/bin
View within the text editor following the update:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/bin"
Make the new path active in the current session:
source /etc/environment
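Alternatively, for the current shell session only, the install directory can be appended to PATH directly (assuming the /root/bin install location used above):
# Session-only PATH addition; does not persist across logins
export PATH="$PATH:/root/bin"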
Display the version of Akash software installed. This confirms the software installed and that the new user path addition worked.
akash version
Expected result:
root@<provider-host>:~# akash version
v0.14.0

STEP3 - Create/Import Akash Account

For a provider to bid on leases, an account with minimum funding of 5 AKT is needed. An account can be created using the commands in this section. Alternatively, an existing account can be imported for provider use (a sketch of the import flow appears at the end of this step).
Specify a key with your choice of name:
AKASH_KEY_NAME=<name>
Specify the location of the keyring on the provider:
AKASH_KEYRING_BACKEND=file
Create the new account and store the encrypted private key in the keyring
  • Enter a passphrase of your choice when prompted
akash --keyring-backend "$AKASH_KEYRING_BACKEND" keys add "$AKASH_KEY_NAME"
Expected results:
root@<provider-host>:~# AKASH_KEY_NAME=providerkey
root@<provider-host>:~# AKASH_KEYRING_BACKEND=file
root@<provider-host>:~# akash --keyring-backend "$AKASH_KEYRING_BACKEND" keys add "$AKASH_KEY_NAME"
Enter keyring passphrase:
Re-enter keyring passphrase:

- name: providerkey
  type: local
  address: akash16hxyzpwgp9elpl52yvll9gczr3vyanfgmdvh4x
  pubkey: akashpub1addwnpepqwz556cp568gk6tj9yxmqshmqad6pj0nnfnqzfufez2fd2jh94fr6y763nc
  mnemonic: ""
  threshold: 0
  pubkeys: []


**Important** write this mnemonic phrase in a safe place.
It is the only way to recover your account if you ever forget your password.

escape dry gate <redacted> prosper human
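To import an existing account rather than create a new one, the keyring's standard recovery flow can be used. This is a minimal sketch, assuming you have the account's mnemonic phrase at hand:
# Prompts for the mnemonic phrase and a keyring passphrase instead of generating a new key
akash --keyring-backend "$AKASH_KEYRING_BACKEND" keys add "$AKASH_KEY_NAME" --recover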

STEP4 - Verify Account Balance

Now that the account has been set up, we should verify that it meets the minimum balance for the provider. As mentioned, the account needs slightly more than 5 AKT at a minimum (5 AKT for the bid deposit plus a small amount for transaction fees).
Specify the Akash network to query (in this case the mainnet):
AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
Query the network for an available node to communicate with:
export AKASH_NODE="$(curl -s "$AKASH_NET/rpc-nodes.txt" | shuf -n 1)"
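Optionally confirm that the randomly selected node is reachable before proceeding. This is a sketch assuming the standard Cosmos SDK status command is available in your Akash build:
# Show which RPC node was selected and query its status (returns a block of JSON)
echo "$AKASH_NODE"
akash status --node "$AKASH_NODE"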
Store the address of the account created in the previous step. Replace the placeholder with your account address (e.g. akash1wpfyf47tzu70q3vu893mghz657gk2kgkuaj5zq):
AKASH_ACCOUNT_ADDRESS=<account-address>
Get your account balance:
akash --node "$AKASH_NODE" query bank balances "$AKASH_ACCOUNT_ADDRESS"
Example output (balances are denominated in uakt, where 1 AKT = 1,000,000 uakt, so this account holds 15 AKT):
root@<provider-host>:~# akash --node "$AKASH_NODE" query bank balances "$AKASH_ACCOUNT_ADDRESS"
balances:
- amount: "15000000"
  denom: uakt
pagination:
  next_key: null
  total: "0"

STEP5 - Create the Provider

Use the host that the Akash software was installed on for this section.
Deployment Domain
  • Create the environment variable of DEPLOYMENT_HOSTNAME
  • This domain is used whenever a lease owner needs to speak directly with the provider to send a manifest or get a lease status
  • The public DNS record for the domain should point to the Kubernetes ingress controller
export DEPLOYMENT_HOSTNAME=<provider-host-domain-name>
Create provider.yaml File
  • Create a file named provider.yaml and add the contents below
  • NOTE - shell variables such as $DEPLOYMENT_HOSTNAME are not expanded inside the YAML file, so enter your provider's actual domain name in the host field
host: https://<provider-host-domain-name>:8443
attributes:
  - key: host
    value: <nameOfYourOrganization>
  • Review the following for a discussion of Akash standard attributes and how/why to use them on your provider
Akash Audited Attributes (Akash Guidebook)
Example of Creating Provider File
  • This example shows the previous steps for further clarity
root@<provider-host>:~# export DEPLOYMENT_HOSTNAME=chainzeroakash.net
root@<provider-host>:~# vi provider.yaml
  • File contents within an editor
host: https://chainzeroakash.net:8443
attributes:
  - key: host
    value: chainzero
Create the Akash Provider
  • Register the provider on the Akash Network
  • Three new environment variables are added for the provider create command to use
  • Replace AKASH_PROVIDER_KEY with the name of the key created earlier (i.e. providerkey in this example)
  • Replace AKASH_HOME with the location of the keyring (i.e. /root/.akash in this example)
export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
AKASH_PROVIDER_KEY=<key-name>
AKASH_HOME=<keyring-location>
akash tx provider create provider.yaml --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --keyring-backend=$AKASH_KEYRING_BACKEND --node=$AKASH_NODE --chain-id=$AKASH_CHAIN_ID --fees 5000uakt
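Once the transaction is confirmed on chain, the registration can optionally be verified by querying the provider record. This is a sketch; the exact query subcommand and output format may vary between Akash versions:
# Query the on-chain provider record for this account
akash query provider get "$AKASH_ACCOUNT_ADDRESS" --node "$AKASH_NODE"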
Example of Creating the Provider
export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
AKASH_PROVIDER_KEY=providerkey
AKASH_HOME=/root/.akash
root@<provider-host>:~# akash tx provider create provider.yaml --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --keyring-backend=$AKASH_KEYRING_BACKEND --node=$AKASH_NODE --chain-id=$AKASH_CHAIN_ID --fees 5000uakt

Enter keyring passphrase:

{"body":{"messages":[{"@type":"/akash.provider.v1beta1.MsgCreateProvider","owner":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx","host_uri":"https://$DEPLOYMENT_HOSTNAME:8443","attributes":[{"key":"host","value":"chainzero"}],"info":{"email":"","website":""}}],"memo":"","timeout_height":"0","extension_options":[],"non_critical_extension_options":[]},"auth_info":{"signer_infos":[],"fee":{"amount":[{"denom":"uakt","amount":"5000"}],"gas_limit":"200000","payer":"","granter":""}},"signatures":[]}

confirm transaction before signing and broadcasting [y/N]: y

{"height":"3413672","txhash":"E9CA2D1ED5FF449E132531C9F6CCBD41F95F01D71C745005E00432852204C564","codespace":"","code":0,"data":"0A110A0F6372656174652D70726F7669646572","raw_log":"[{\"events\":[{\"type\":\"akash.v1\",\"attributes\":[{\"key\":\"module\",\"value\":\"provider\"},{\"key\":\"action\",\"value\":\"provider-created\"},{\"key\":\"owner\",\"value\":\"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx\"}]},{\"type\":\"message\",\"attributes\":[{\"key\":\"action\",\"value\":\"create-provider\"},{\"key\":\"sender\",\"value\":\"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx\"}]},{\"type\":\"transfer\",\"attributes\":[{\"key\":\"recipient\",\"value\":\"akash17xpfvakm2amg962yls6f84z3kell8c5lazw8j8\"},{\"key\":\"sender\",\"value\":\"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx\"},{\"key\":\"amount\",\"value\":\"5000uakt\"}]}]}]","logs":[{"msg_index":0,"log":"","events":[{"type":"akash.v1","attributes":[{"key":"module","value":"provider"},{"key":"action","value":"provider-created"},{"key":"owner","value":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx"}]},{"type":"message","attributes":[{"key":"action","value":"create-provider"},{"key":"sender","value":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx"}]},{"type":"transfer","attributes":[{"key":"recipient","value":"akash17xpfvakm2amg962yls6f84z3kell8c5lazw8j8"},{"key":"sender","value":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx"},{"key":"amount","value":"5000uakt"}]}]}],"info":"","gas_wanted":"200000","gas_used":"64055","tx":null,"timestamp":""}

STEP6 - Create a TLS Certificate

Create a TLS certificate for your provider. The certificate will be stored on the blockchain.
akash tx cert create server $DEPLOYMENT_HOSTNAME --chain-id $AKASH_CHAIN_ID --keyring-backend $AKASH_KEYRING_BACKEND --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --node=$AKASH_NODE --fees 5000uakt
Example of Creating the Certificate
root@<provider-host>:~# akash tx cert create server $DEPLOYMENT_HOSTNAME --chain-id $AKASH_CHAIN_ID --keyring-backend $AKASH_KEYRING_BACKEND --from $AKASH_PROVIDER_KEY --home=$AKASH_HOME --node=$AKASH_NODE --fees 5000uakt

Enter keyring passphrase:

no certificate found for address akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx. generating new...

Enter keyring passphrase:

{"body":{"messages":[{"@type":"/akash.cert.v1beta1.MsgCreateCertificate","owner":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx","cert":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNFRENDQWJhZ0F3SUJBZ0lJRnJYcVY3ZU9BRTR3Q2dZSUtvWkl6ajBFQXdJd1NqRTFNRE1HQTFVRUF4TXMKWVd0aGMyZ3hlRzE2T1dWek9XRjVPV3h1T1hneWJUTnhPR1JzZFRCaGJIaG1NR3gwWTJVM2VXdHFabmd4RVRBUApCZ1ZuZ1FVQ0JoTUdkakF1TUM0eE1CNFhEVEl4TVRFd09URTFNamd5TWxvWERUSXlNVEV3T1RFMU1qZ3lNbG93ClNqRTFNRE1HQTFVRUF4TXNZV3RoYzJneGVHMTZPV1Z6T1dGNU9XeHVPWGd5YlROeE9HUnNkVEJoYkhobU1HeDAKWTJVM2VXdHFabmd4RVRBUEJnVm5nUVVDQmhNR2RqQXVNQzR4TUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowRApBUWNEUWdBRTYrY1Q0ZkprQ3FjK01ibGdKdldjREhLK3BGL1JLb241V3NLSGdqODZDNnZpT2dGa3ZhRzVocGdZCkV2SXl2YkwvdHNxdjFtZ0I3ZzNnRG1ZTnNFaSt0YU9CaFRDQmdqQU9CZ05WSFE4QkFmOEVCQU1DQkRBd0hRWUQKVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0lHQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwUgpCQll3RklJU1kyaGhhVzU2WlhKdllXdGhjMmd1Ym1WME1DUUdBMVVkSGdFQi93UWFNQmlnRmpBVWdoSmphR0ZwCmJucGxjbTloYTJGemFDNXVaWFF3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU01c3NzaWJ6alpsRmdBWE9vdVQKTWc5YlBUeFBGNHNTZGNzcUFwOW9xSjh2QWlCQ2V0c2pwanlXWUhmdFBELzV0eGJVNFhqNUg4NWltYzY2d0lHSApqUFZCNnc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","pubkey":"LS0tLS1CRUdJTiBFQyBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNitjVDRmSmtDcWMrTWJsZ0p2V2NESEsrcEYvUgpLb241V3NLSGdqODZDNnZpT2dGa3ZhRzVocGdZRXZJeXZiTC90c3F2MW1nQjdnM2dEbVlOc0VpK3RRPT0KLS0tLS1FTkQgRUMgUFVCTElDIEtFWS0tLS0tCg=="}],"memo":"","timeout_height":"0","extension_options":[],"non_critical_extension_options":[]},"auth_info":{"signer_infos":[],"fee":{"amount":[{"denom":"uakt","amount":"5000"}],"gas_limit":"200000","payer":"","granter":""}},"signatures":[]}

confirm transaction before signing and broadcasting [y/N]: y

{"height":"3413739","txhash":"7E7D6E588956E39607DF7986A7B1FF75327D8456C3D18A2581BABDC5EB24E623","codespace":"","code":0,"data":"0A190A17636572742D6372656174652D6365727469666963617465","raw_log":"[{\"events\":[{\"type\":\"message\",\"attributes\":[{\"key\":\"action\",\"value\":\"cert-create-certificate\"},{\"key\":\"sender\",\"value\":\"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx\"}]},{\"type\":\"transfer\",\"attributes\":[{\"key\":\"recipient\",\"value\":\"akash17xpfvakm2amg962yls6f84z3kell8c5lazw8j8\"},{\"key\":\"sender\",\"value\":\"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx\"},{\"key\":\"amount\",\"value\":\"5000uakt\"}]}]}]","logs":[{"msg_index":0,"log":"","events":[{"type":"message","attributes":[{"key":"action","value":"cert-create-certificate"},{"key":"sender","value":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx"}]},{"type":"transfer","attributes":[{"key":"recipient","value":"akash17xpfvakm2amg962yls6f84z3kell8c5lazw8j8"},{"key":"sender","value":"akash1xmz9es9ay9ln9x2m3q8dlu0alxf0ltce7ykjfx"},{"key":"amount","value":"5000uakt"}]}]}],"info":"","gas_wanted":"200000","gas_used":"90954","tx":null,"timestamp":""}
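In addition to the on-chain record, the certificate and its private key are kept locally so the provider can terminate TLS. Assuming the typical layout of an <account-address>.pem file under the Akash home directory (path and filename are assumptions), the pair can be confirmed with:
# Expect a <account-address>.pem file containing the certificate and key (location is an assumption)
ls -l "$AKASH_HOME"/*.pem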

STEP7 - Configure Kubectl

If the provider host is not a Kubernetes master node, kubectl and the kubeconfig file might not be present. In this step we create the kubeconfig file on the provider host, which is necessary when we later start the provider.
Verify Kubeconfig File
  • On the provider host, verify if the kubeconfig file is present
  • We are looking for the presence of the .kube directory within the user’s home directory
cd ~
ls -al
Example output of directory contents
  • In this example the .kube directory does not exist and we will need to create it
  • If the directory does exist and you are able to run kubectl commands (e.g. "kubectl get nodes"), feel free to skip forward to STEP8
total 56
drwx------ 8 root root 4096 Nov 8 21:23 .
drwxr-xr-x 19 root root 4096 Nov 1 14:53 ..
drwx------ 5 root root 4096 Nov 8 21:06 .akash
drwx------ 3 root root 4096 Nov 2 18:49 .ansible
-rw------- 1 root root 74 Nov 2 16:38 .bash_history
-rw-r--r-- 1 root root 3106 Dec 5 2019 .bashrc
drwx------ 2 root root 4096 Nov 2 16:38 .cache
-rw-r--r-- 1 root root 161 Dec 5 2019 .profile
drwx------ 2 root root 4096 Nov 1 14:53 .ssh
-rw------- 1 root root 7349 Nov 8 21:03 .viminfo
drwxr-xr-x 2 root root 4096 Nov 8 20:38 bin
-rw-r--r-- 1 root root 118 Nov 8 21:03 provider.yaml
drwxr-xr-x 4 root root 4096 Nov 1 14:53 snap
Create a .kube Directory
mkdir .kube
Copy Kubeconfig to the Provider
  • We will use the following command to copy the config file from the Kubernetes master to the provider host
  • Replace the username and IP address parts of the command
scp <username>@<ipaddress>:/root/.kube/config /root/.kube/config
Install Kubectl on the Provider
stable=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)

curl -LO https://storage.googleapis.com/kubernetes-release/release/${stable}/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl
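Optionally confirm the kubectl binary itself before testing cluster access:
# Prints only the client version; does not require cluster connectivity
kubectl version --client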
Verify Kubectl
  • Once the kubeconfig file has been copied and kubectl installed, you should be able to execute commands such as "kubectl get nodes", as shown in the example below
root@<provider-host>:~# kubectl get nodes
NAME    STATUS   ROLES                  AGE    VERSION
node1   Ready    control-plane,master   6d2h   v1.22.3
node2   Ready    control-plane,master   6d2h   v1.22.3
node3   Ready    <none>                 6d2h   v1.22.3

STEP8 - Start the Provider

In this final step, the provider is started.
Kubernetes Domain
  • Create the environment variable of KUBERNETES_HOSTNAME
  • The variable will be used as the value for --cluster-public-hostname during provider start up and is the publicly accessible hostname of the Kubernetes cluster.
  • If multiple master nodes exist in the Kubernetes cluster, either the DNS record should point to the IP addresses of all master nodes or an alternative load balancing strategy should be used.
  • NOTE - within this guide --cluster-public-hostname (Kubernetes Cluster) and --deployment-ingress-domain (Ingress Controller) point to the same domain name but often the domains will be different.
export KUBERNETES_HOSTNAME=chainzeroakash.net
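The start command below reads the kubeconfig path from the KUBECONFIG environment variable. If it is not already set in your shell, point it at the file copied in STEP7 (the path below assumes the /root/.kube/config destination used earlier):
export KUBECONFIG=/root/.kube/config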
Start Provider
akash provider run \
  --home $AKASH_HOME \
  --chain-id $AKASH_CHAIN_ID \
  --node $AKASH_NODE \
  --keyring-backend=file \
  --from $AKASH_PROVIDER_KEY \
  --fees 1000uakt \
  --kubeconfig $KUBECONFIG \
  --cluster-k8s true \
  --deployment-ingress-domain $DEPLOYMENT_HOSTNAME \
  --deployment-ingress-static-hosts true \
  --bid-price-strategy scale \
  --bid-price-cpu-scale 0.001 \
  --bid-price-memory-scale 0.001 \
  --bid-price-storage-scale 0.00001 \
  --bid-price-endpoint-scale 0 \
  --bid-deposit 5000000uakt \
  --cluster-node-port-quantity 1000 \
  --cluster-public-hostname $KUBERNETES_HOSTNAME
Expected Output
  • When the provider starts, the initial output should look like the following
  • Only the first lines, indicating a successful start, are shown; the full output is not displayed
root@<provider-host>:~# akash provider run --home $AKASH_HOME --chain-id $AKASH_CHAIN_ID --node $AKASH_NODE --keyring-backend=file --from $AKASH_PROVIDER_KEY --fees 1000uakt --kubeconfig $KUBECONFIG --cluster-k8s true --deployment-ingress-domain $DEPLOYMENT_HOSTNAME --deployment-ingress-static-hosts true --bid-price-strategy scale --bid-price-cpu-scale 0.001 --bid-price-memory-scale 0.001 --bid-price-storage-scale 0.00001 --bid-price-endpoint-scale 0 --bid-deposit 5000000uakt --cluster-node-port-quantity 1000 --cluster-public-hostname $KUBERNETES_HOSTNAME

Enter keyring passphrase:
Enter keyring passphrase:

I[2021-11-09|16:00:48.251] found leases module=provider-cluster cmp=service num-active=0
I[2021-11-09|16:00:48.251] found deployments module=provider-cluster cmp=service num-active=0 num-skipped=0
D[2021-11-09|16:00:48.251] inventory ready module=provider-cluster cmp=service cmp=inventory-service
D[2021-11-09|16:00:48.251] inventory fetched module=provider-cluster cmp=service cmp=inventory-service nodes=1
D[2021-11-09|16:00:48.251] node resources module=provider-cluster cmp=service cmp=inventory-service node-id=solo available-cpu="units:<val:\"5000\" > " available-memory="quantity:<val:\"34359738368\" > " available-storage="quantity:<val:\"549755813888\" > "
I[2021-11-09|16:00:49.982] syncing sequence cmp=client/broadcaster local=2 remote=2
I[2021-11-09|16:00:51.558] found orders module=bidengine-service count=109
D[2021-11-09|16:00:51.558] creating catchup order module=bidengine-service order=order/akash1057uu9jaehgqwk5g8g85nuq3esu0n2wxhejk9z/2200682/1/1
D[2021-11-09|16:00:51.558] creating catchup order module=bidengine-service order=order/akash109pttclfdj6erune0e8v9zed2pkvczq63u8yzp/3411023/1/1
D[2021-11-09|16:00:51.558] creating catchup order module=bidengine-service order=order/akash109pttclfdj6erune0e8v9zed2pkvczq63u8yzp/3411064/1/1
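With the provider running, a quick external check is to query the provider's status endpoint on port 8443 from another machine. This is a sketch assuming the standard Akash provider status endpoint and that DNS and any required port forwarding for 8443 are in place:
# -k skips certificate verification, since the provider presents its own on-chain certificate
curl -ks "https://$DEPLOYMENT_HOSTNAME:8443/status"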