Datacenters deploy workloads and allow connectivity as specified by the tenant.
A tenant can close any active deployment at any time.
A tenant hosting an application on the Akash network
Each datacenter will host an agent which acts as a mediator between the Akash Network and datacenter-local infrastructure.
The datacenter agent is responsible for
Bidding on orders fulfillable by the datacenter.
Managing active leases it is a provider for.
An Akash Node that is elected to be a validator in the DPoS consensus scheme.
Marketplace facilitators maintain the distributed exchange (marketplace). Validators will initially perform this function.
Number of vCPUs
Amount of memory in GB
Amount of block storage in GB
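The resource unit described above might be modeled as follows. This is a minimal sketch; the field names are illustrative and not Akash's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceUnit:
    """One unit of compute as described above (illustrative names)."""
    cpu: int      # number of vCPUs
    memory: int   # amount of memory in GB
    storage: int  # amount of block storage in GB

# e.g. a unit with 2 vCPUs, 4 GB of memory, and 20 GB of block storage
unit = ResourceUnit(cpu=2, memory=4, storage=20)
```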
Deployment represents the state of a tenant's application. It includes desired infrastructure and pricing parameters, as well as workload definitions and connectivity.
DeploymentInfrastructure represents a set of resources (including pricing) that a tenant would like to have provisioned in a single datacenter. Orders are created from deployment infrastructure as necessary.
In the resources list, resource group fields are interpreted as follows:
Collateral fields must be the same.
ID of lease being confirmed
Sent by a tenant to update their application on Akash.
Sent by a tenant to close their application on Akash.
Sent by a facilitator after the deployment's datacenters confirm that the deployment's resources have been deallocated.
For each order that is ready to be fulfilled (i.e., whose wait-duration has transpired):
Find the matching fulfillment with the lowest price.
For each active lease that has not been confirmed within the reconfirmation-period:
For each lease currently provided by the datacenter:
Submit a LeaseConfirmation event for the lease.
For each open order:
If the datacenter is out of collateral, exit.
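The facilitator's matching step above — select the fulfillment with the lowest price for a ready order — can be sketched as follows. The record shapes are hypothetical; real Akash messages differ.

```python
def match_order(order, fulfillments):
    """Facilitator step: once an order's wait-duration has transpired,
    select the matching fulfillment with the lowest price.
    (Dict shapes are hypothetical, for illustration only.)"""
    candidates = [f for f in fulfillments if f["order"] == order["id"]]
    if not candidates:
        return None  # no bids yet; order stays open
    return min(candidates, key=lambda f: f["price"])

bids = [
    {"order": 1, "provider": "dc-a", "price": 12},
    {"order": 1, "provider": "dc-b", "price": 9},
    {"order": 2, "provider": "dc-a", "price": 7},
]
winner = match_order({"id": 1}, bids)  # dc-b wins order 1 at price 9
```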
Once resources have been procured, clients must distribute their workloads to providers so that they can execute on the leased resources. We refer to the current state of the client’s workloads on the Akash Network as a "deployment".
A tenant describes their desired deployment in a "manifest". The manifest contains workload definitions, configuration, and connection rules. Providers use workload definitions and configuration to execute the workloads on the resources they’re providing, and use the connection rules to build an overlay network and firewall configurations.
A hash of the manifest is known as the deployment "version" and is stored on the blockchain-based distributed database.
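Deriving the deployment version could look like the sketch below. Canonical JSON encoding and SHA-256 are assumptions made for illustration; the actual wire encoding and hash function may differ.

```python
import hashlib
import json

def deployment_version(manifest: dict) -> str:
    """Derive the deployment "version" as a hash of the manifest.
    Canonical JSON + SHA-256 are assumptions, not Akash's actual scheme."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

v1 = deployment_version({"workloads": ["web"], "rules": []})
v2 = deployment_version({"rules": [], "workloads": ["web"]})
assert v1 == v2  # canonicalization: key order does not change the version
```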
Stack infrastructure is submitted to the ledger.
Ask orders are generated for resources defined in the stack infrastructure.
Providers (data centers) bid on orders.
Leases are reached by matching bid and ask orders.
Stack manifest is distributed to deployment data centers (lease providers).
Datacenters deploy workloads and distribute connection parameters to all other deployment datacenters.
Overlay network is established to allow for connectivity between workloads.
Tenant closes the deployment.
Datacenters receive the close event.
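The lifecycle above can be summarized as an ordered sequence of phases; the enum below is a sketch with illustrative names, not Akash's actual state model.

```python
from enum import Enum, auto

class DeploymentPhase(Enum):
    """Lifecycle phases from the steps above (names are illustrative)."""
    INFRASTRUCTURE_SUBMITTED = auto()  # stack infrastructure on the ledger
    ORDERS_GENERATED = auto()          # ask orders created for resources
    BIDS_PLACED = auto()               # providers bid on orders
    LEASES_MATCHED = auto()            # bid and ask orders matched
    MANIFEST_DISTRIBUTED = auto()      # manifest sent to lease providers
    WORKLOADS_DEPLOYED = auto()        # workloads running in datacenters
    OVERLAY_ESTABLISHED = auto()       # cross-datacenter connectivity up
    CLOSED = auto()                    # tenant closed the deployment

phases = list(DeploymentPhase)
```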
Each on-chain deployment contains a hash of the manifest. This hash represents the deployment version.
The manifest contains sensitive information which should only be shared with participants of the deployment. This poses a problem for self-managed deployments - Akash must distribute the workload definition autonomously, without revealing its contents to unnecessary participants.
To address these issues, we devised a peer-to-peer file sharing scheme in which lease participants distribute the manifest to one another as needed. The protocol runs off-chain over a TLS connection; each participant can verify the manifest they received by computing its hash and comparing this with the deployment version that is stored on the blockchain-backed distributed database.
In addition to providing private, secure, autonomous manifest distribution, the peer-to-peer protocol also enables fast distribution of large manifests to a large number of datacenters.
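The verification step described above — hash the received manifest and compare it against the on-chain deployment version — can be sketched as follows (SHA-256 is assumed here for illustration).

```python
import hashlib

def verify_manifest(received: bytes, onchain_version: str) -> bool:
    """A lease participant verifies a manifest received off-chain by
    computing its hash and comparing it with the deployment version
    stored on chain (SHA-256 assumed for illustration)."""
    return hashlib.sha256(received).hexdigest() == onchain_version

manifest = b'{"workloads":["web","db"]}'
version = hashlib.sha256(manifest).hexdigest()  # as stored on the chain

assert verify_manifest(manifest, version)        # genuine copy accepted
assert not verify_manifest(b"tampered", version) # altered copy rejected
```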
By default, a workload’s network is isolated - nothing can connect to it. While this is secure, it is not practical for real-world applications. For example, consider a simple web application: end-tenant browsers should have access to the web tier workload, and the web tier needs to communicate to the database workload. Furthermore, the web tier may not be hosted in the same datacenter as the database.
On the Akash Network, clients can selectively allow communications to and between workloads by defining a connection topology within the manifest. Datacenters use this topology to configure firewall rules and to create a secure network between individual workloads as needed.
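A default-deny connectivity check over such a topology might look like the sketch below; the rule shape is hypothetical, not Akash's manifest format.

```python
def allowed(rules, src, dst):
    """Default-deny check: a connection from src to dst is permitted
    only if the topology explicitly allows it (rule shape is
    hypothetical, for illustration)."""
    return any(r["from"] == src and r["to"] == dst for r in rules)

# The web-application example from the text: browsers reach the web
# tier, and the web tier reaches the database; nothing else connects.
topology = [
    {"from": "internet", "to": "web"},
    {"from": "web", "to": "db"},
]
assert allowed(topology, "web", "db")
assert not allowed(topology, "internet", "db")  # isolated by default
```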
To support secure cross-datacenter communications, providers expose workloads to each other through an mTLS tunnel. Each workload-to-workload connection uses a distinct tunnel.
Before establishing these tunnels, providers generate a TLS certificate for each required tunnel and exchange these certificates with the necessary peer providers. Each provider’s root certificate is stored on the blockchain-based distributed database, enabling peers to verify the authenticity of the certificates they receive.
Once certificates are exchanged, providers establish an authenticated tunnel and connect the workload’s network to it. All of this is transparent to the workloads themselves - they can connect to one another through stable addresses and standard protocols.
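A server-side mTLS context for one such tunnel could be configured as sketched below using Python's standard `ssl` module; the file paths are placeholders, and this is an illustration of mutual TLS in general, not Akash's actual tunnel implementation.

```python
import ssl

def provider_tunnel_context(root_cert: str, cert: str, key: str) -> ssl.SSLContext:
    """Sketch of a server-side mTLS context for one workload tunnel:
    present our per-tunnel certificate and require the peer provider's
    certificate, verified against its root cert as published on the
    blockchain-based database (paths are placeholders)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED          # mutual TLS: peer must present a cert
    ctx.load_verify_locations(cafile=root_cert)  # peer provider's root certificate
    ctx.load_cert_chain(certfile=cert, keyfile=key)  # our per-tunnel certificate
    return ctx
```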
Deployments are closed by tenants and pass through two states. The first, the Closing state, notifies the datacenter so that allocated resources can be deallocated. The tenant will not be billed for leases in the Closing state. Once the datacenter has deallocated its resources and made them available again, the previous lease will be set to the Closed state.
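The two-phase close above can be sketched as a small state machine; the state names mirror the text, while the transition functions are hypothetical.

```python
# Minimal sketch of the two-phase deployment close described above.
OPEN, CLOSING, CLOSED = "open", "closing", "closed"

def close_deployment(lease):
    """Tenant closes: the lease enters Closing and billing stops."""
    lease["state"] = CLOSING
    lease["billed"] = False
    return lease

def resources_deallocated(lease):
    """Datacenter has deallocated its resources: lease is set to Closed."""
    lease["state"] = CLOSED
    return lease

lease = {"state": OPEN, "billed": True}
close_deployment(lease)       # tenant is no longer billed
resources_deallocated(lease)  # lease reaches its final state
```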
A stack is a description of all components necessary to deploy an application on the Akash Network.
A stack includes:
Manifest of workloads to deploy on procured infrastructure.
A manifest describes workloads and how they should be deployed.
A manifest includes:
Workloads to be executed.
Data center placement for each workload.
Connectivity rules describing which entities are allowed to connect to each workload.
A deployment represents the current state of a stack as fulfilled by the Akash Network.
Infrastructure procured via the cloud exchange (leases).
Manifest distribution state.
Overlay network state.
The dynamic nature of cloud infrastructure is both a blessing and a curse for operations management. That new resources can be provisioned at will is a blessing; the exploding management overhead and complexity of said resources is a curse. The goal of DevOps -- the practice of managing deployments programmatically -- is to alleviate the pain points of cloud infrastructure by leveraging its strengths.
The Akash Network was built from the ground up to provide DevOps engineers with a simple but powerful toolset for creating highly-automated deployments. The toolset comprises the primitives that enable non-management applications -- generic workloads and overlay networks -- and can be leveraged to create autonomous, self-managed systems.
Self-managed deployments on Akash are a simple matter of creating workloads that manage their own deployment. A DevOps engineer may employ a workload that updates DNS entries as providers join or leave the deployment; tests response times of web-tier applications; and scales infrastructure up and down (in accordance with permissions and constraints defined by the client) based on any number of input metrics. The "management tier" may be spread across all datacenters for a deployment, with global state maintained by a distributed database running over the secure overlay network.
Many web-based applications are "latency-sensitive" - lower response times from application servers translate into a dramatically improved end-tenant experience. Modern deployments of such applications employ content delivery networks (CDNs) to deliver static content such as images to end tenants quickly.
CDNs provide reduced latency by distributing content so that it is geographically close to the tenants that are accessing it. Deployments on the Akash Network can not only replicate this approach, but beat it - Akash gives clients the ability to place dynamic content close to an application’s tenants.
To implement a self-managed "dynamic delivery network" on Akash, a DevOps engineer would include a management tier in their deployment which monitors the geographical location of clients. This management tier would add and remove datacenters across the globe, provisioning more resources in regions where tenant activity is high and fewer in regions where it is low.
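The management tier's placement decision could be as simple as the sketch below: distribute a unit budget across regions in proportion to observed tenant activity. All names are illustrative assumptions.

```python
from collections import Counter

def target_allocation(request_regions, total_units):
    """Sketch of a dynamic-delivery placement decision: split
    total_units across regions in proportion to observed request
    volume (names are illustrative, not an Akash API)."""
    counts = Counter(request_regions)
    total = sum(counts.values())
    return {region: round(total_units * n / total)
            for region, n in counts.items()}

# 100 observed requests: 60% us-east, 30% eu-west, 10% ap-south
requests = ["us-east"] * 60 + ["eu-west"] * 30 + ["ap-south"] * 10
alloc = target_allocation(requests, total_units=10)
# → {"us-east": 6, "eu-west": 3, "ap-south": 1}
```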
Machine Learning Deployment
Machine learning applications employ a large number of nodes to parallelize computations involving large datasets. They do their work in "batches" - there is no "steady state" of capacity that is required.
A machine learning application on Akash may use a management tier to proactively procure resources within a single datacenter. As a machine learning task begins, the management tier can "scale up" the number of nodes for it; when a task completes, the resources provisioned for it can be relinquished.
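The scale-up/relinquish decision for such a batch workload can be sketched as below; the function and its parameters are illustrative assumptions, not an Akash interface.

```python
def nodes_for_batch(tasks_pending, per_node_capacity, max_nodes):
    """Sketch of an ML management tier's scale decision: provision
    enough nodes for the pending batch, capped by the lease's
    constraints, and relinquish everything when no work remains."""
    if tasks_pending == 0:
        return 0  # batch complete: relinquish provisioned resources
    needed = -(-tasks_pending // per_node_capacity)  # ceiling division
    return min(needed, max_nodes)

assert nodes_for_batch(0, 8, 50) == 0      # idle: no steady-state capacity
assert nodes_for_batch(100, 8, 50) == 13   # scale up for the batch
assert nodes_for_batch(1000, 8, 50) == 50  # capped by the constraint
```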
Off-chain event bus: implemented as marketplace service
Telemetry data via off-chain event bus