Feat: Add 0G Compute Network v0.1 doc (#28)
Showing 11 changed files with 7,883 additions and 22 deletions.
---
id: 0g-compute
title: 0G Compute Network
---

# 0G Compute Network: Decentralized Inference & Beyond

In today's world, AI models are transforming industries, driving innovation, and powering new applications. Despite their growing value, there's a significant gap in how AI services are delivered. Centralized platforms often limit access, increase costs, and restrict the flexibility of AI developers. This is where the 0G Compute Network steps in to make a difference.

## What is 0G Compute Network?

The 0G Compute Network is a decentralized framework that provides AI computing capabilities to our community. It forms a crucial part of dAIOS and, together with the storage network, offers comprehensive end-to-end support for dApp development and services.

The first iteration focuses specifically on decentralized settlement for inference, connecting buyers (who want to use AI models) with sellers (who run these models on their GPUs) in a trustless, transparent manner. Sellers, known as service providers, set the price for each model they support and are rewarded in real time for their contributions. It's a fully decentralized marketplace that eliminates the need for intermediaries, redefining how AI services are accessed and delivered and making them cheaper, more efficient, and accessible to anyone, anywhere.

## How does it work?

The 0G Compute Network contract facilitates secure interactions between users (AI buyers) and service providers (GPU owners running AI models), ensuring smooth data retrieval, fee collection, and service execution. Here's how it works:

1. **Service Provider Registration:** Service providers first register the type of AI service they offer (e.g., model inference) and set pricing for each type within the smart contract.
2. **User Pre-deposits Fees:** When a user wants to access a service, they pre-deposit a fee into the smart contract associated with the selected service provider. This ensures that funds are available to compensate the service provider.
3. **Request and Response System:** Users send requests for AI inference, and the service provider decides whether to respond based on the sufficiency of the user's remaining balance. Both the user and the provider sign each request and response, ensuring trustless verification of transactions.

Here are some of the key features of the system:

- **Open Access with Fair Rewards:** Anyone with the right hardware can become a service provider and earn fair compensation for running AI models. This open-access, decentralized structure enables a global network of contributors, where providers are directly rewarded for their computational resources and services, fostering a new ecosystem of decentralized AI.
- **Optimized Efficiency:** The 0G Compute Network uses a variety of mechanisms to minimize costs and maximize performance. Service providers can batch-process multiple user requests to minimize the number of on-chain settlements, optimizing transaction costs and network efficiency. ZK-proofs are used to compress transaction data, lowering on-chain settlement costs. Additionally, to reduce the on-chain costs of storing request traces with data keys, 0G Storage allows for scalable off-chain data management, enabling more efficient storage and retrieval while keeping costs low.
- **User-Centric Design:** The platform offers a smooth user experience, with a built-in refund mechanism that ensures users can reclaim unused funds within a clearly defined time window. This process is executed by smart contracts, ensuring a reliable, secure, and frictionless process for both service providers and users.

By decentralizing both services and settlement, the 0G Compute Network provides a scalable and trustless alternative to centralized AI platforms.

Over time, we aim to decentralize the entire AI workflow, from inference to data and training, by keeping everything on-chain and autonomous.
`docs/build-with-0g/compute-network/data/ts-sdk-example.ts` (109 additions, 0 deletions)
```typescript
import { ethers } from "ethers";
import { createZGServingNetworkBroker } from "@0glabs/0g-serving-broker";
import OpenAI from "openai";

async function main() {
  const provider = new ethers.JsonRpcProvider("https://evmrpc-testnet.0g.ai");

  // Step 1: Create a wallet with a private key
  const privateKey =
    "Please input your private key, and make sure it has enough testnet 0GAI token";
  const wallet = new ethers.Wallet(privateKey, provider);

  // Step 2: Initialize the broker
  try {
    const broker = await createZGServingNetworkBroker(wallet);

    // Step 3: List available services
    console.log("Listing available services...");
    const services = await broker.listService();
    services.forEach((service: any) => {
      console.log(
        `Service: ${service.name}, Provider: ${service.provider}, Type: ${service.serviceType}, Model: ${service.model}, URL: ${service.url}`
      );
    });

    // Step 3.1: Select a service
    const service = services.find(
      (service: any) => service.name === "Please input the service name"
    );
    if (!service) {
      console.error("Service not found.");
      return;
    }
    const providerAddress = service.provider;

    // Step 4: Manage Accounts
    const initialBalance = 0.00000001;
    // Step 4.1: Create a new account
    console.log("Creating a new account...");
    await broker.addAccount(providerAddress, initialBalance);
    console.log("Account created successfully.");

    // Step 4.2: Deposit funds into the account
    const depositAmount = 0.00000002;
    console.log("Depositing funds...");
    await broker.depositFund(providerAddress, depositAmount);
    console.log("Funds deposited successfully.");

    // Step 4.3: Get the account
    const account = await broker.getAccount(providerAddress);
    console.log(account);

    // Step 5: Use the Provider's Services
    console.log("Processing a request...");
    const serviceName = service.name;
    const content = "Please input your message here";

    // Step 5.1: Get the request metadata
    const { endpoint, model } = await broker.getServiceMetadata(
      providerAddress,
      serviceName
    );

    // Step 5.2: Get the request headers
    const headers = await broker.getRequestHeaders(
      providerAddress,
      serviceName,
      content
    );

    // Step 6: Send a request to the service
    const openai = new OpenAI({
      baseURL: endpoint,
      apiKey: "",
    });
    const completion = await openai.chat.completions.create(
      {
        messages: [{ role: "system", content }],
        model: model,
      },
      {
        headers: {
          ...headers,
        },
      }
    );

    const receivedContent = completion.choices[0].message.content;
    const chatID = completion.id;
    if (!receivedContent) {
      throw new Error("No content received.");
    }
    console.log("Response:", receivedContent);

    // Step 7: Process the response
    console.log("Processing a response...");
    const isValid = await broker.processResponse(
      providerAddress,
      serviceName,
      receivedContent,
      chatID
    );
    console.log(`Response validity: ${isValid ? "Valid" : "Invalid"}`);
  } catch (error) {
    console.error("Error during execution:", error);
  }
}

main();
```
---
id: marketplace
title: Marketplace
sidebar_position: 4
---

Coming soon
---
id: overview
title: Overview
sidebar_position: 1
---

The 0G Compute Network connects AI users and AI service providers, making it easy for AI users to access a wide range of compute and model services. As part of this, the framework is built to provide permissionless settlement between AI users and AI service providers in a fast and trustworthy manner, as required by a fully distributed AI economy.

We integrate various stages of the AI process to make these services both verifiable and billable. This ensures that Service Providers, such as platforms or users offering compute resources, can deliver trusted and accountable solutions.

![architecture](./architecture.png)

## Components

**Contract:** This component determines the legitimacy of settlement proofs, manages accounts, and handles service information. To do this, it stores variables during the service process, including account information, service details (such as name and URL), and consensus logic.

**Provider:** The owners of AI models and hardware who offer their services for a fee.

**User:** Individuals or organizations who use the services listed by Service Providers. They may also use AI services directly or build applications on top of our API.

## Process Overview

The 0G Compute Network implements the following workflow:

1. **Service Registration:** Providers register their services' types, URLs, and prices in the smart contract.
2. **Fee Staking:** Users deposit a certain amount into the smart contract to cover service fees. If the accumulated charges from a user's requests exceed their deposit, the provider stops responding.
3. **Request Submission:** Users or developers send requests, along with metadata and signatures, to the Service Provider.
4. **Provider Response:** Providers respond based on the user's balance and the request's validity.
5. **Settlement and Verification:** Providers generate a [zero-knowledge proof (ZK-proof)](https://github.com/0glabs/0g-zk-settlement-server?tab=readme-ov-file) and submit it to the smart contract for verification and settlement.
6. **User Verification:** Users verify the Service Provider's response and can stop sending requests if verification fails.
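The balance rule in steps 2 and 4 can be sketched as simple bookkeeping. This is a hypothetical in-memory model for illustration, not the on-chain contract logic:

```typescript
// Hypothetical in-memory model of the deposit/charge rule described above:
// the provider serves a request only while accumulated charges stay within
// the user's deposit (step 2); otherwise it stops responding (step 4).
class ProviderLedger {
  private deposits = new Map<string, bigint>(); // user -> total deposited fee
  private charges = new Map<string, bigint>(); // user -> accumulated charges

  deposit(user: string, amount: bigint): void {
    this.deposits.set(user, (this.deposits.get(user) ?? 0n) + amount);
  }

  // Returns true if the provider should respond, charging `fee` for the request.
  tryCharge(user: string, fee: bigint): boolean {
    const deposited = this.deposits.get(user) ?? 0n;
    const charged = this.charges.get(user) ?? 0n;
    if (charged + fee > deposited) return false; // deposit exhausted: stop responding
    this.charges.set(user, charged + fee);
    return true;
  }

  // Amount the provider would claim at settlement (step 5).
  settleable(user: string): bigint {
    return this.charges.get(user) ?? 0n;
  }
}
```

In the real system these balances live in the smart contract, and the provider's claim must be accompanied by a ZK-proof before settlement.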
This brief overview introduces the foundational workflow. For more detailed steps, please refer to the full documentation.

## Get Involved

If you're interested in becoming a **Service Provider**, please refer to [the Provider section](./provider.md) for detailed guidelines and requirements.

If you wish to leverage provider services to develop your own projects, relevant resources are available in [the Developer SDK section](./developer-sdk).

For those looking to use the 0G Compute Network to access AI services, more information can be found in [the Marketplace section](./marketplace.md).
---
id: provider
title: Becoming a Service Provider
sidebar_position: 2
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

To integrate your AI services into the 0G Compute Network and become a Service Provider, you must first transform your service into a verifiable service and connect it through the provider broker container.

This is easy to do, and we provide a walkthrough below.

## Verifiable Services

### Service Interface Requirements

Large Language Models (LLMs) revolutionize communication, knowledge access, and automation by generating human-like text, so we start by supporting language models. For a consistent experience, providers should support the [OpenAI API Interface Standards](https://platform.openai.com/docs/api-reference/chat).

### Verification Interfaces

To ensure the integrity and trustworthiness of services, different verification mechanisms are employed. Each mechanism comes with its own specific set of protocols and requirements for service verification and security.

<Tabs>
<TabItem value="TEEML" label="TEEML" default>

For TEE (Trusted Execution Environment) verification, the service should generate a signing key within the TEE when it starts. We require CPU and GPU attestations to ensure the service is running in a Confidential VM with an NVIDIA GPU in TEE mode. These attestations should include the public key of the signing key, verifying its creation within the TEE. All inference results must be signed with this signing key.

_Note_: Ensure that Intel TDX (Trust Domain Extensions) is enabled on the CPU. Additionally, an H100 or H200 GPU is required for GPU TEE.

#### 1. Attestation Download Interface

To facilitate attestation downloads, set up an API endpoint at:

```
GET https://{PUBLIC_IP}/attestation/report
```

This endpoint should return a JSON structure in the following format:

```json
{
  "signing_address": "...",
  "nvidia_payload": "..."
}
```

_Note_: Ensure that the `nvidia_payload` can be verified using NVIDIA's GPU Attestation API. Support for decentralized TEE attestation is planned for the future, and the relevant interfaces will be provided. Stay tuned.
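As a sketch, the endpoint above could be served with Node's built-in `http` module. The report values here are placeholders: a real service would obtain `signing_address` from the key generated inside the TEE and `nvidia_payload` from the GPU attestation tooling.

```typescript
import { createServer } from "node:http";

// Sketch of the attestation endpoint shape described above. Both values are
// placeholders; in production they come from the TEE-held signing key and
// from NVIDIA's GPU attestation tooling respectively.
const report = {
  signing_address: "0x0000000000000000000000000000000000000000", // placeholder
  nvidia_payload: "...", // placeholder: opaque attestation payload
};

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/attestation/report") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(report));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// server.listen(8080); // in production, expose behind TLS at https://{PUBLIC_IP}
```

In production this would sit behind a TLS terminator so the report is served over HTTPS at the public IP.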
#### 2. Signature Download Interface

To facilitate the downloading of response signatures, provide an API endpoint at:

```
GET https://{PUBLIC_IP}/signature/{response_id}
```

Each response should include a unique ID that can be utilized to retrieve its signature using the above endpoint.

- **Signature Generation**: Ensure the signature is generated using the ECDSA algorithm.
- **Verification**: The signature should be verifiable with the signing address, along with the corresponding request and response content.
</TabItem>

<TabItem value="OPML_ZKML_and_others" label="OPML, ZKML, and others">
Coming soon
</TabItem>

</Tabs>

## Provider Broker

To register and manage services, handle user request proxies, and perform settlements, you need to use the Provider Broker.

### Prerequisites

- Docker Compose: 1.27+

### Download the Installation Package

Please visit the [releases page](https://github.com/0glabs/0g-serving-broker/releases) to download and extract the latest version of the installation package.

### Configuration Setup

- Copy the `config.example.yaml` file.
- Modify `servingUrl` to point to your publicly exposed URL.
- Set `privateKeys` to your wallet's private key for the 0G blockchain.
- Save the file as `config.local.yaml`.
- Replace `#PORT#` in `docker-compose.yml` with the port you want to use. It should match the port of `servingUrl` in `config.local.yaml`.
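Put together, a minimal `config.local.yaml` might look like this sketch. Only the keys named in the steps above are shown, the values are placeholders, and the remaining keys from `config.example.yaml` should be kept as provided:

```yaml
# config.local.yaml: placeholder values; keep all other keys from
# config.example.yaml unchanged.
servingUrl: "https://your-public-host:8080" # port must match #PORT# in docker-compose.yml
privateKeys: "<your 0G wallet private key>"
```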
### Start the Provider Broker

```bash
docker compose -f docker-compose.yml up -d
```

### Key Commands

1. **Register the Service**

   The compute network currently supports `chatbot` services. Additional service types are in the pipeline and will be released soon.

   ```bash
   curl -X POST http://127.0.0.1:<PORT>/v1/service \
     -H "Content-Type: application/json" \
     -d '{
       "URL": "<endpoint_of_the_prepared_service>",
       "inputPrice": "10000000",
       "outputPrice": "20000000",
       "Type": "chatbot",
       "Name": "llama8Bb",
       "Model": "llama-3.1-8B-Instruct",
       "verifiability": "TeeML"
     }'
   ```

   - `inputPrice` and `outputPrice` vary by service type; for `chatbot`, they represent the cost per token. The unit is the neuron: 1 A0GI = 1e18 neuron.

2. **Settle the Fee**

   ```bash
   curl -X POST http://127.0.0.1:<PORT>/v1/settle
   ```

   - The provider broker has an automatic settlement engine that collects fees promptly before a customer's account balance becomes insufficient, while minimizing the frequency of charges to reduce gas consumption.
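As an illustration of the pricing units, a hypothetical per-request charge under the prices registered above works out as follows (assuming straightforward tokens times per-token price billing):

```typescript
// Hypothetical cost calculation for the example prices registered above:
// inputPrice = 10000000 neuron/token, outputPrice = 20000000 neuron/token,
// and 1 A0GI = 1e18 neuron.
const NEURON_PER_A0GI = 10n ** 18n;
const inputPrice = 10_000_000n; // neuron per input token
const outputPrice = 20_000_000n; // neuron per output token

function requestCostNeuron(inputTokens: bigint, outputTokens: bigint): bigint {
  return inputTokens * inputPrice + outputTokens * outputPrice;
}

// e.g. 1000 input tokens + 500 output tokens:
const cost = requestCostNeuron(1000n, 500n); // 20,000,000,000 neuron
const costInA0GI = Number(cost) / Number(NEURON_PER_A0GI); // 2e-8 A0GI
```

Because per-token prices are tiny fractions of an A0GI, many requests can be served against even a small pre-deposited balance.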
### Additional API Information

For more details, please refer to the <a href="/html/compute-network-provider-api.html" target="_blank" rel="noopener noreferrer">API Page</a>.