AI Precompile ABIs
We built Citrate to extend the standard set of Ethereum precompiled contracts with AI-native operations. Precompiles are special contracts deployed at fixed addresses that execute native code instead of EVM bytecode, giving us large gas savings for computationally intensive operations.
Standard Ethereum Precompiles
Citrate includes all nine standard Ethereum precompiles at their canonical addresses.
| Address | Name | Description | Base Gas Cost |
|---|---|---|---|
| 0x01 | ECRECOVER | Elliptic curve public key recovery | 3,000 |
| 0x02 | SHA256 | SHA-256 hash function | 60 + 12/word |
| 0x03 | RIPEMD160 | RIPEMD-160 hash function | 600 + 120/word |
| 0x04 | IDENTITY | Data copy (identity function) | 15 + 3/word |
| 0x05 | MODEXP | Modular exponentiation | dynamic |
| 0x06 | ECADD | BN256 elliptic curve point addition | 150 |
| 0x07 | ECMUL | BN256 elliptic curve scalar multiplication | 6,000 |
| 0x08 | ECPAIRING | BN256 elliptic curve pairing check | 45,000 + 34,000/pair |
| 0x09 | BLAKE2F | BLAKE2b compression function | 1/round |
Calling Standard Precompiles
We call standard precompiles using STATICCALL or CALL to their fixed addresses. For example, to recover a signer from a signature using ECRECOVER at 0x01:
```solidity
// Solidity example: ECRECOVER
// abi.encode (not abi.encodePacked) pads v to a full 32-byte word,
// matching the 128-byte input layout described below.
(bool success, bytes memory result) = address(0x01).staticcall(
    abi.encode(hash, v, r, s)
);
// On an invalid signature the precompile returns empty output
// rather than reverting, so check the result length.
require(success && result.length == 32, "ecrecover failed");
address signer = abi.decode(result, (address));
```
The input to 0x01 (ECRECOVER) is 128 bytes: the 32-byte message hash, the 32-byte v value (padded), the 32-byte r value, and the 32-byte s value. The output is the 32-byte recovered address (left-padded with zeros).
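The 128-byte layout can be sketched off-chain. This is a minimal illustration of the input encoding only; the function name is ours, not part of any client library.

```python
def ecrecover_input(msg_hash: bytes, v: int, r: bytes, s: bytes) -> bytes:
    """Build the 128-byte ECRECOVER precompile input: hash | v | r | s."""
    assert len(msg_hash) == 32 and len(r) == 32 and len(s) == 32
    # v (typically 27 or 28) is left-padded to a full 32-byte word.
    return msg_hash + v.to_bytes(32, "big") + r + s

payload = ecrecover_input(b"\x11" * 32, 27, b"\x22" * 32, b"\x33" * 32)
assert len(payload) == 128
```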
Citrate AI Precompiles
Our custom AI precompiles occupy the address range 0x0100 through 0x0106. These precompiles provide native execution for AI model operations, enabling gas-efficient on-chain inference and model management.
0x0100 ... MODEL_DEPLOY
MODEL_DEPLOY registers a new AI model in the on-chain model registry. The model weights are stored off-chain (IPFS or Arweave), while the metadata and content hash are stored on-chain.
Input (ABI-encoded):
```solidity
function deployModel(
    string name,         // Model name (max 64 bytes)
    bytes32 modelHash,   // SHA-256 hash of the model weights
    string storageUri,   // IPFS or Arweave URI for model weights
    string format,       // Model format: "onnx", "gguf", "safetensors"
    bytes inputSchema,   // JSON schema for model input
    bytes outputSchema   // JSON schema for model output
) returns (bytes32 modelId)
```
Gas formula:
```
gas = 100,000 + (input_size_bytes * 16)
```
Output: 32-byte model ID derived from keccak256(sender, name, modelHash, block.number).
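The gas formula above can be sketched as a helper for off-chain cost estimates. This is illustrative only; the node's native implementation is authoritative, and the function name is ours.

```python
def model_deploy_gas(input_size_bytes: int) -> int:
    """Estimated MODEL_DEPLOY gas: 100,000 base + 16 gas per input byte."""
    return 100_000 + input_size_bytes * 16

# e.g. a 1 KiB ABI-encoded deployment payload
assert model_deploy_gas(1024) == 116_384
```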
0x0101 ... MODEL_INFERENCE
We use MODEL_INFERENCE to execute a single inference call against a registered model. The precompile loads the model from its storage URI, runs the inference in a sandboxed WASM runtime, and returns the output.
Input (ABI-encoded):
```solidity
function runInference(
    bytes32 modelId,    // ID of the registered model
    bytes input,        // ABI-encoded input conforming to the model's input schema
    bool generateProof  // Whether to generate a ZK proof of the inference
) returns (bytes output, bytes proof)
```
Gas formula:
```
gas = 50,000 + (model_params * 0.001) + (input_size_bytes * 8)
// If generateProof is true, add 200,000 gas
```
Output: ABI-encoded tuple of (bytes output, bytes proof). If generateProof is false, the proof field is empty.
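The inference gas formula, including the proof surcharge, can be sketched the same way (illustrative only; names are ours):

```python
def model_inference_gas(model_params: int, input_size_bytes: int,
                        generate_proof: bool) -> int:
    """Estimated MODEL_INFERENCE gas per the documented formula."""
    gas = 50_000 + int(model_params * 0.001) + input_size_bytes * 8
    if generate_proof:
        gas += 200_000  # flat ZK proof generation surcharge
    return gas

# A 1M-parameter model with a 256-byte input, no proof:
assert model_inference_gas(1_000_000, 256, False) == 53_048
```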
0x0102 ... BATCH_INFERENCE
We use BATCH_INFERENCE to execute multiple inference calls in a single precompile invocation. Batching amortizes the model loading cost across multiple inputs.
Input (ABI-encoded):
```solidity
function batchInference(
    bytes32 modelId,     // ID of the registered model
    bytes[] inputs,      // Array of ABI-encoded inputs
    bool generateProofs  // Whether to generate ZK proofs for each inference
) returns (bytes[] outputs, bytes[] proofs)
```
Gas formula:
```
gas = 50,000 + (model_params * 0.001) + (batch_size * input_size_bytes * 6)
// If generateProofs is true, add 200,000 * batch_size gas
// Maximum batch size: 32
```
Output: ABI-encoded tuple of (bytes[] outputs, bytes[] proofs).
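The amortization is visible when the batch formula is compared against repeated single-item batches. A sketch (illustrative only; the function name is ours):

```python
def batch_inference_gas(model_params: int, batch_size: int,
                        input_size_bytes: int, generate_proofs: bool) -> int:
    """Estimated BATCH_INFERENCE gas per the documented formula."""
    assert 1 <= batch_size <= 32, "maximum batch size is 32"
    gas = 50_000 + int(model_params * 0.001) + batch_size * input_size_bytes * 6
    if generate_proofs:
        gas += 200_000 * batch_size  # per-inference proof surcharge
    return gas

# Eight 256-byte inputs against a 1M-parameter model: one batched call
# pays the 50k base (model load) once instead of eight times.
batched = batch_inference_gas(1_000_000, 8, 256, False)
eight_singles = 8 * batch_inference_gas(1_000_000, 1, 256, False)
assert batched < eight_singles
```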
0x0103 ... MODEL_METADATA
MODEL_METADATA returns on-chain metadata for a registered model. This is a read-only operation with no state changes.
Input (ABI-encoded):
```solidity
function getModelMetadata(
    bytes32 modelId  // ID of the registered model
) returns (
    string name,
    bytes32 modelHash,
    string storageUri,
    string format,
    address owner,
    uint256 deployBlock,
    uint256 inferenceCount,
    bytes inputSchema,
    bytes outputSchema
)
```
Gas formula:
```
gas = 2,600 // Fixed cost (SLOAD equivalent)
```
0x0104 ... PROOF_VERIFY
We use PROOF_VERIFY to verify a zero-knowledge proof of an inference result. On-chain verifier contracts use it to confirm that a model output was produced correctly without re-executing the inference.
Input (ABI-encoded):
```solidity
function verifyProof(
    bytes proof,        // The ZK proof bytes
    bytes32 modelId,    // The model that produced the output
    bytes32 inputHash,  // Hash of the inference input
    bytes32 outputHash  // Hash of the inference output
) returns (bool valid)
```
Gas formula:
```
gas = 200,000 + (proof_size_bytes * 16)
```
Output: A single boolean indicating whether the proof is valid.
0x0105 ... MODEL_BENCHMARK
Runs a standardized benchmark suite against a registered model and returns performance metrics. Benchmark results are stored on-chain and used by the mentorship protocol to rank model providers.
Input (ABI-encoded):
```solidity
function benchmarkModel(
    bytes32 modelId,      // ID of the registered model
    bytes benchmarkSuite  // Identifier for the benchmark suite to run
) returns (
    uint256 latencyMs,    // Average inference latency in milliseconds
    uint256 throughput,   // Inferences per second
    uint256 accuracy,     // Accuracy score (basis points, 0-10000)
    bytes32 resultHash    // Hash of the full benchmark results
)
```
Gas formula:
```
gas = 500,000 + (model_params * 0.01)
```
0x0106 ... MODEL_ENCRYPTION
We treat privacy as paramount but optional. MODEL_ENCRYPTION encrypts or decrypts model weights using the node's secure enclave. This precompile is used for confidential model deployment, where weights must remain private.
Input (ABI-encoded):
```solidity
function encryptModel(
    bytes modelWeights,     // Raw model weights
    bytes32 encryptionKey,  // Public key of the intended recipient
    bool isEncrypt          // true for encrypt, false for decrypt
) returns (bytes result)
```
Gas formula:
```
gas = 100,000 + (input_size_bytes * 32)
```
Output: The encrypted or decrypted model weights.
Calling AI Precompiles from Solidity
Here is an example of calling the MODEL_INFERENCE precompile from a Solidity contract:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract CitrateInference {
    address constant MODEL_INFERENCE = address(0x0101);

    function runInference(
        bytes32 modelId,
        bytes calldata input
    ) external returns (bytes memory output, bytes memory proof) {
        // Use CALL rather than STATICCALL: inference updates the
        // model's on-chain inferenceCount.
        (bool success, bytes memory result) = MODEL_INFERENCE.call(
            abi.encode(modelId, input, true) // generateProof = true
        );
        require(success, "Inference failed");
        (output, proof) = abi.decode(result, (bytes, bytes));
    }
}
```
Gas Estimation
When estimating gas for AI precompile calls, use eth_estimateGas with the precompile address as the to field and the ABI-encoded parameters as data. The estimate includes both the precompile execution cost and any overhead from the calling contract.
```json
{
  "jsonrpc": "2.0",
  "method": "eth_estimateGas",
  "params": [{
    "to": "0x0000000000000000000000000000000000000101",
    "data": "0x...abi_encoded_params"
  }],
  "id": 1
}
```
Error Handling
AI precompile calls revert with specific error selectors when execution fails:
| Error | Selector | Meaning |
|---|---|---|
| ModelNotFound() | 0x4c4e5c01 | The model ID does not exist in the registry |
| InvalidInput() | 0x8baa579f | Input does not conform to the model schema |
| InferenceFailed() | 0x2d7c1233 | Runtime error during model execution |
| ProofGenerationFailed() | 0x9a1f2740 | ZK proof generation encountered an error |
| BatchSizeExceeded() | 0xb3c5e100 | Batch size exceeds the maximum of 32 |
| InsufficientGas() | 0x6a125670 | Not enough gas provided for the operation |
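Off-chain tooling can map revert data back to these names by matching the first four bytes. A minimal sketch using the selectors from the table above (the function name is ours):

```python
# Selectors from the error table, keyed by their 4-byte hex prefix.
ERROR_SELECTORS = {
    "4c4e5c01": "ModelNotFound()",
    "8baa579f": "InvalidInput()",
    "2d7c1233": "InferenceFailed()",
    "9a1f2740": "ProofGenerationFailed()",
    "b3c5e100": "BatchSizeExceeded()",
    "6a125670": "InsufficientGas()",
}

def decode_revert(revert_data: bytes) -> str:
    """Map the 4-byte selector at the start of revert data to an error name."""
    selector = revert_data[:4].hex()
    return ERROR_SELECTORS.get(selector, "UnknownError")

assert decode_revert(bytes.fromhex("b3c5e100")) == "BatchSizeExceeded()"
```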