Building with MCP
The Model Context Protocol (MCP) is Citrate's framework for AI model interoperability. We created MCP because the on-chain precompiles handle individual inference requests well, but developers need higher-level abstractions for multi-model pipelines, context sharing, and agent orchestration. MCP operates as a hybrid on-chain/off-chain protocol, with discovery and payment on-chain and data flow optimized through off-chain channels.
Discovering Models
MCP extends the on-chain ModelRegistry with rich metadata queries. Beyond basic model lookup, MCP provides semantic search, capability matching, and compatibility checks that help developers find the right model for their use case.
```typescript
import { CitrateMCP } from "@citrate/mcp-sdk";

const mcp = new CitrateMCP({
  rpcUrl: "https://rpc.cnidarian.cloud",
  mcpGateway: "https://mcp.cnidarian.cloud",
});

// Discover models by capability
const models = await mcp.discover({
  category: "nlp/sentiment",
  minReputation: 7500, // Minimum 75% reputation score
  maxLatencyMs: 100,
  maxPricePerRequest: "0.002", // Max 0.002 SALT
});

console.log(`Found ${models.length} matching models`);
for (const model of models) {
  console.log(`  ${model.name} ... rep: ${model.reputation}, price: ${model.price} SALT`);
}
```
Discovery queries are resolved off-chain through the MCP gateway but validate against on-chain state. This means you always get fresh availability data without paying gas for read queries.
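Conceptually, the gateway-side filtering is a predicate over registry entries matching the constraints passed to `discover()`. Here is a minimal sketch of that matching logic; the `ModelEntry` shape and field names are illustrative assumptions, not the actual gateway implementation:

```typescript
// Hypothetical registry entry shape; fields mirror the discover() filters above.
interface ModelEntry {
  name: string;
  category: string;
  reputation: number;      // basis points, 0-10000
  avgLatencyMs: number;
  pricePerRequest: number; // in SALT
}

interface DiscoverQuery {
  category: string;
  minReputation: number;
  maxLatencyMs: number;
  maxPricePerRequest: number;
}

// Keep only entries that satisfy every constraint in the query.
function filterModels(entries: ModelEntry[], q: DiscoverQuery): ModelEntry[] {
  return entries.filter(
    (m) =>
      m.category === q.category &&
      m.reputation >= q.minReputation &&
      m.avgLatencyMs <= q.maxLatencyMs &&
      m.pricePerRequest <= q.maxPricePerRequest,
  );
}
```

The real gateway additionally validates each candidate against on-chain state before returning it, so stale or deregistered models never appear in results.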
Composing Multi-Model Pipelines
One of MCP's most powerful features is pipeline composition. You can chain multiple models together where the output of one becomes the input of the next, with context preserved across steps.
```typescript
// Define a multi-step analysis pipeline
const pipeline = mcp.createPipeline("document-analysis");

// Step 1: Extract key entities from the document
pipeline.addStep({
  model: "entity-extractor-v2",
  inputMapping: (input) => ({ text: input.document }),
  outputKey: "entities",
});

// Step 2: Classify sentiment for each entity mention
pipeline.addStep({
  model: "sentiment-v1",
  inputMapping: (prev) => ({
    text: prev.document,
    entities: prev.entities,
  }),
  outputKey: "sentiments",
});

// Step 3: Generate a summary incorporating entity sentiments
pipeline.addStep({
  model: "summarizer-v3",
  inputMapping: (prev) => ({
    text: prev.document,
    entities: prev.entities,
    sentiments: prev.sentiments,
  }),
  outputKey: "summary",
});

// Execute the pipeline
const result = await pipeline.execute({
  document: "The Cnidarian Foundation announced...",
});
console.log(result.summary);
```
Each step in the pipeline creates a context object on-chain via the ContextBridge precompile (0x0104), ensuring that intermediate results are verifiable and that models in later steps can reference earlier outputs.
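At its core, pipeline execution is a fold over the steps: each step's `inputMapping` projects the accumulated context into model input, and the output is written back under `outputKey` so later steps can reference it. A minimal synchronous sketch of that loop (real steps invoke models asynchronously and record context on-chain; the `Step` shape here is an assumption):

```typescript
type Ctx = Record<string, unknown>;

interface Step {
  // Stand-in for a model call; the real SDK dispatches to a registered model.
  model: (input: Ctx) => unknown;
  inputMapping: (ctx: Ctx) => Ctx;
  outputKey: string;
}

// Run steps in order, threading the accumulated context through each one.
function executePipeline(steps: Step[], input: Ctx): Ctx {
  let ctx: Ctx = { ...input };
  for (const step of steps) {
    const output = step.model(step.inputMapping(ctx));
    ctx = { ...ctx, [step.outputKey]: output };
  }
  return ctx;
}
```

Note that the original input keys (like `document` above) remain in the context, which is why step 2 and step 3 can still map `prev.document` even though earlier steps only produced `entities` and `sentiments`.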
Context Sharing
Context sharing allows multiple models and agents to read and write shared state within a session. This is essential for building coherent multi-turn interactions and collaborative AI workflows.
```typescript
// Create a shared context session
const session = await mcp.createSession({
  ttl: 100, // Context lives for 100 blocks
});

// Write context from one model's output
await session.setContext("user-profile", {
  riskTolerance: "moderate",
  investmentHorizon: "long-term",
  preferredAssets: ["SALT", "ETH", "BTC"],
});

// Another model reads the shared context
const context = await session.getContext("user-profile");

// Use the context in an inference request
const advice = await mcp.infer("financial-advisor-v1", {
  query: "Should I increase my SALT allocation?",
  context: context,
});
```
Context objects are stored on-chain with a time-to-live (TTL) measured in blocks. When the TTL expires, the context is pruned during the next finality checkpoint. For long-lived sessions, your application should refresh the TTL before expiration.
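A common pattern for long-lived sessions is to refresh whenever the remaining TTL drops below a safety margin. A sketch of that bookkeeping; the `expiresAtBlock` field and `refreshTtl` helper are illustrative names, not SDK API:

```typescript
interface SessionState {
  expiresAtBlock: number; // block height at which the context is pruned
}

// True when fewer than `margin` blocks of TTL remain.
function needsRefresh(session: SessionState, currentBlock: number, margin: number): boolean {
  return session.expiresAtBlock - currentBlock <= margin;
}

// Extend the TTL to `ttlBlocks` from the current block.
function refreshTtl(session: SessionState, currentBlock: number, ttlBlocks: number): SessionState {
  return { expiresAtBlock: currentBlock + ttlBlocks };
}
```

Choose the margin to cover your worst-case refresh latency, including the time for the refresh transaction itself to be included in a block.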
Agent Orchestration
MCP supports autonomous agent orchestration, where AI agents can discover capabilities, negotiate with other agents, and compose workflows dynamically. This is the foundation for Citrate's vision of an AI-native economy.
```typescript
// Define an agent with specific capabilities
const agent = mcp.createAgent({
  name: "portfolio-rebalancer",
  capabilities: ["financial-analysis", "risk-assessment", "trade-execution"],
  models: ["risk-scorer-v2", "market-predictor-v1"],
});

// Agent discovers and delegates to specialist agents
agent.onTask("rebalance-portfolio", async (task) => {
  // Find a market analysis agent
  const analyst = await mcp.findAgent({
    capability: "market-analysis",
    minReputation: 8000,
  });

  // Delegate market analysis
  const analysis = await analyst.request({
    action: "analyze-market",
    assets: task.portfolio.assets,
    session: task.sessionId,
  });

  // Use analysis to make rebalancing decisions
  const rebalancePlan = await agent.infer("risk-scorer-v2", {
    currentPortfolio: task.portfolio,
    marketAnalysis: analysis,
  });

  return rebalancePlan;
});

// Start the agent
await agent.start();
```
Agent-to-agent communication uses MCP's messaging layer, with payments settled through the InferenceEngine precompile. Each agent interaction creates an auditable trail of context objects and inference attestations.
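The `onTask`/`start` pattern above amounts to a handler registry keyed by task name, with incoming messages dispatched to the matching handler. A minimal sketch of that dispatch mechanism (real handlers are async and each interaction settles payment; this class is an illustration, not the SDK's agent runtime):

```typescript
type TaskHandler = (payload: Record<string, unknown>) => unknown;

class MiniAgent {
  private handlers = new Map<string, TaskHandler>();

  // Register a handler for a named task, as agent.onTask() does.
  onTask(name: string, handler: TaskHandler): void {
    this.handlers.set(name, handler);
  }

  // Dispatch an incoming task to its registered handler.
  handle(name: string, payload: Record<string, unknown>): unknown {
    const handler = this.handlers.get(name);
    if (!handler) throw new Error(`no handler for task: ${name}`);
    return handler(payload);
  }
}
```

Rejecting unknown task names explicitly, rather than ignoring them, makes misrouted messages visible in the audit trail instead of silently dropping them.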
Best Practices
When building with MCP, keep these guidelines in mind:
- Set appropriate TTLs: Context objects consume on-chain storage. Use short TTLs for ephemeral data and longer TTLs only when downstream models need delayed access.
- Handle model unavailability: Models can go offline. Always specify fallback models in your pipelines using pipeline.addFallback().
- Monitor costs: Multi-model pipelines multiply inference fees. Use pipeline.estimateCost() before execution to check total SALT expenditure.
- Verify intermediate results: For high-value workflows, enable optimistic or ZK verification on critical pipeline steps rather than relying solely on signature attestation.
Further Reading
- Model Context Protocol -- architectural overview of MCP
- Using AI Precompiles -- the ContextBridge precompile interface
- Verifiable Inference -- verification tiers for pipeline steps
- Registering a Model -- making your model discoverable via MCP