Registering a Model

We designed the model registration process to be as simple as possible while ensuring quality. Registering a model on Citrate makes it discoverable to smart contracts and dApp developers across the network. Once registered, your model appears in the on-chain ModelRegistry, and any contract can request inference from it through the InferenceEngine precompile. This guide walks through the full registration process from model preparation to verification.

Step 1: Prepare Model Weights

Before registering on-chain, you need to make your model weights available at a content-addressable location. Citrate uses IPFS CIDs as the canonical content hash for model weights, ensuring that the on-chain reference points to an immutable artifact.

Export your model to a supported format:

# Convert a PyTorch model to ONNX
python -c "
import torch
from my_model import SentimentModel
 
model = SentimentModel()
model.load_state_dict(torch.load('sentiment_v1.pt'))
model.eval()
 
dummy_input = torch.randn(1, 512)
torch.onnx.export(model, dummy_input, 'sentiment_v1.onnx',
                   input_names=['input'],
                   output_names=['output'],
                   dynamic_axes={'input': {0: 'batch_size'}})
"
 
# Or export to safetensors format
python -c "
from safetensors.torch import save_file
import torch
 
weights = torch.load('sentiment_v1.pt')
save_file(weights, 'sentiment_v1.safetensors')
"

Upload the exported model to IPFS:

Using the IPFS CLI:

ipfs add sentiment_v1.onnx

This returns a content hash like QmXyz.... Alternatively, use a pinning service:

curl -X POST "https://api.pinata.cloud/pinning/pinFileToIPFS" -H "Authorization: Bearer $PINATA_JWT" -F "file=@sentiment_v1.onnx"

Supported formats are ONNX (.onnx), SafeTensors (.safetensors), and GGUF (.gguf) for large language models. The network validates the format during registration.
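Before uploading, a quick local sanity check can catch a mis-exported file. The sketch below is illustrative and not part of the Citrate tooling: it checks the extension and, where the format defines one, the file's magic or header bytes (safetensors files begin with an 8-byte little-endian header length followed by a JSON header; GGUF files begin with the ASCII magic `GGUF`; ONNX is a protobuf payload with no fixed magic, so only the extension is checked).

```python
import json
import struct
from pathlib import Path

SUPPORTED = {".onnx", ".safetensors", ".gguf"}

def check_model_file(path: str) -> str:
    """Best-effort local format check before uploading weights to IPFS."""
    p = Path(path)
    ext = p.suffix.lower()
    if ext not in SUPPORTED:
        raise ValueError(f"unsupported extension: {ext}")
    data = p.read_bytes()
    if ext == ".gguf":
        # GGUF files start with the ASCII magic 'GGUF'
        if data[:4] != b"GGUF":
            raise ValueError("not a GGUF file (bad magic)")
    elif ext == ".safetensors":
        # safetensors: u64 little-endian header length, then a JSON header
        (header_len,) = struct.unpack("<Q", data[:8])
        json.loads(data[8 : 8 + header_len])  # raises if the header is not JSON
    # ONNX is protobuf with no fixed magic; extension check only
    return ext

# Example: a minimal in-memory safetensors-style header
header = json.dumps({"__metadata__": {}}).encode()
Path("dummy.safetensors").write_bytes(struct.pack("<Q", len(header)) + header)
print(check_model_file("dummy.safetensors"))  # .safetensors
```

A check like this is cheap insurance: the network rejects malformed artifacts at registration time, but catching them before the IPFS upload saves a round trip.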

Step 2: Create Model Metadata

Prepare a metadata JSON file that describes your model's capabilities, input/output schema, and resource requirements. This metadata is stored alongside the on-chain registration and helps consumers understand what your model does.

{
  "name": "sentiment-v1",
  "version": "1.0.0",
  "description": "Sentiment analysis model fine-tuned on financial news corpus",
  "category": "nlp/classification",
  "input_schema": {
    "type": "object",
    "properties": {
      "text": { "type": "string", "maxLength": 4096 }
    }
  },
  "output_schema": {
    "type": "object",
    "properties": {
      "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
      "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
    }
  },
  "compute_requirements": {
    "min_vram_gb": 4,
    "estimated_latency_ms": 50,
    "max_batch_size": 32
  }
}

Upload this metadata to IPFS as well, and note the CID.
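Before pinning, it is worth validating a sample payload against the input schema you declared, since consumers will rely on it. The sketch below is a minimal hand-rolled check covering only the `string`/`maxLength` rules used in this guide's metadata; a production setup would use a full JSON Schema validator.

```python
# Abridged metadata from this guide: just the input schema
metadata = {
    "name": "sentiment-v1",
    "input_schema": {
        "type": "object",
        "properties": {"text": {"type": "string", "maxLength": 4096}},
    },
}

def validate_input(payload: dict, schema: dict) -> list:
    """Check a payload against the subset of JSON Schema used above.

    Returns a list of human-readable errors; empty means valid.
    """
    errors = []
    for name, rules in schema.get("properties", {}).items():
        value = payload.get(name)
        if rules.get("type") == "string":
            if not isinstance(value, str):
                errors.append(f"{name}: expected string")
            elif "maxLength" in rules and len(value) > rules["maxLength"]:
                errors.append(f"{name}: exceeds maxLength {rules['maxLength']}")
    return errors

print(validate_input({"text": "A valid request"}, metadata["input_schema"]))  # []
print(validate_input({"text": "x" * 5000}, metadata["input_schema"]))  # one error
```

Running the same check server-side in your inference endpoint keeps the on-chain schema and the actual behavior in sync.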

Step 3: Call ModelRegistry.register()

With your weights and metadata hosted, you can register the model on-chain. Registration requires a SALT stake bond that serves as quality collateral: if your model consistently produces poor results or goes offline, a portion of this bond can be slashed.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
 
interface IModelRegistry {
    function register(
        string calldata name,
        string calldata contentHash,
        string calldata endpoint,
        uint256 inferencePrice,
        uint256 stakeBond
    ) external payable returns (bytes32 modelId);
}
 
contract ModelRegistrar {
    IModelRegistry constant registry = IModelRegistry(address(0x0100));
 
    function registerMyModel() external payable returns (bytes32) {
        return registry.register{value: msg.value}(
            "sentiment-v1",
            "QmXyzContentHashOfModelWeights",
            "https://inference.mynode.io/sentiment-v1",
            0.001 ether,    // 0.001 SALT per inference
            10 ether         // 10 SALT stake bond
        );
    }
}

You can also register via the Citrate CLI:

citrate-cli model register --name "sentiment-v1" --content-hash "QmXyzContentHashOfModelWeights" --endpoint "https://inference.mynode.io/sentiment-v1" --price 0.001 --stake 10 --rpc https://testnet-rpc.cnidarian.cloud --private-key $PRIVATE_KEY

Step 4: Set Inference Pricing

Inference pricing is set during registration but can be updated afterward. The price represents the SALT cost per inference request and should reflect your compute costs plus a margin. The network enforces a minimum price floor to prevent race-to-the-bottom dynamics.

Pricing considerations:

  • Compute cost: GPU-hours consumed per inference, amortized across expected request volume
  • Bandwidth: Data transfer costs for input/output payloads
  • Stake opportunity cost: The SALT locked as collateral earns no staking rewards
  • Market rate: Check comparable models on the ModelRegistry to price competitively
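As a worked example of the considerations above, the sketch below computes a per-inference break-even price from amortized GPU cost plus bandwidth, with a margin on top. All numbers are hypothetical, not network parameters.

```python
def break_even_price(
    gpu_cost_per_hour: float,   # GPU rental rate, in SALT-equivalent
    inferences_per_hour: int,   # expected sustained request volume
    bandwidth_cost: float,      # per-request data transfer cost
    margin: float = 0.25,       # markup over raw cost (here 25%)
) -> float:
    """Per-inference price: amortized compute + bandwidth, plus margin."""
    compute = gpu_cost_per_hour / inferences_per_hour
    return (compute + bandwidth_cost) * (1 + margin)

# Hypothetical: 1.2 SALT/GPU-hour, 2000 req/hour, near-zero bandwidth cost
price = break_even_price(1.2, 2000, 0.0001)
print(round(price, 6))  # 0.000875
```

Compare the result against the network's minimum price floor and against comparable models in the ModelRegistry before setting --price; if volume turns out lower than projected, update the price rather than operate at a loss.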

Update pricing after registration:

citrate-cli model update-price --model-id 0xYOUR_MODEL_ID --new-price 0.0015 --rpc https://testnet-rpc.cnidarian.cloud --private-key $PRIVATE_KEY

Step 5: Verify Registration

After the registration transaction confirms, verify that your model appears correctly in the registry:

Query the model by ID:

citrate-cli model info --model-id 0xYOUR_MODEL_ID --rpc https://testnet-rpc.cnidarian.cloud

Or use cast to call the precompile directly:

cast call 0x0100 "getModel(bytes32)(string,address,string,uint256,uint256,uint256)" 0xYOUR_MODEL_ID --rpc-url https://testnet-rpc.cnidarian.cloud

Test on testnet first before committing real SALT on mainnet. You should see your model name, owner address, endpoint, price, stake, and an initial reputation score of zero. The reputation score increases as your model successfully serves inference requests and receives positive attestations from consumers.

To confirm your model is serving correctly, submit a test inference request:

citrate-cli inference request --model 0xYOUR_MODEL_ID --input '{"text": "Testing model registration on Citrate"}' --rpc https://testnet-rpc.cnidarian.cloud

Further Reading