
Citrate Technical Paper

The First AI-Native BlockDAG: Complete Protocol Specification

Larry Klosowski · Cnidarian Foundation

The complete protocol specification. GhostDAG consensus with k=18, BFT finality committee of 100 validators at 67% threshold, 10-block checkpoints, Lattice Virtual Machine with five AI precompiles, MCP orchestration layer, and SALT tokenomics. This is the reference document. Every parameter used across all other papers traces back here.

Abstract

We present the protocol specification for the Citrate Network, a BlockDAG system that extends GhostDAG consensus with AI-specific execution capabilities. The architecture comprises three layers: a GhostDAG consensus layer providing parallel block production with BFT finality checkpoints (k=18, committee of 100 validators, finality in approximately 12 seconds); a Lattice Virtual Machine (LVM) providing full EVM bytecode compatibility augmented with five AI-specific precompiled contracts at addresses 0x1000-0x1004; and a Model Context Protocol (MCP) layer providing standardized REST endpoints for model discovery, inference, and orchestration. This paper documents the implemented components of the system (the consensus engine, the virtual machine with tested AI precompiles, dual ECDSA/Ed25519 cryptographic identity, and SALT tokenomics) and distinguishes them from components that remain in design or research phases. The consensus layer is architecturally designed to support future extensions, including federated learning integration, described in the companion Paper II. We report testnet measurements where available and flag planned features explicitly. The native token SALT (1 billion supply) functions as gas, staking collateral, and governance weight.

Keywords: BlockDAG, GhostDAG, BFT finality, EVM compatibility, AI precompiles, Model Context Protocol, verifiable inference, Citrate Network

1. Introduction

Citrate is a three-layer blockchain network designed for AI-native applications. The project’s thesis is that a general-purpose execution environment with on-chain AI primitives (model registration, verifiable inference, tensor operations, adapter management, and model orchestration) can serve as infrastructure for decentralized AI systems in a way that purpose-built AI coordination networks (which lack general programmability) and general-purpose blockchains (which lack AI-specific optimizations) cannot.

The design draws on three established technologies. First, GhostDAG consensus, originated by Sompolinsky and Zohar [1, 2] and first implemented in production by the Kaspa network, provides a BlockDAG structure enabling parallel block production with deterministic ordering. Second, the Ethereum Virtual Machine, the most widely deployed smart contract runtime, provides developer tooling, auditing infrastructure, and application compatibility. Third, the Model Context Protocol, originated by Anthropic [23], provides standardized interfaces for model-to-model and model-to-tool communication.

This paper is organized as a protocol specification. Section 2 describes the consensus layer. Section 3 describes the Lattice Virtual Machine and its AI precompiles. Section 4 describes the MCP orchestration layer. Section 5 describes SALT tokenomics. Section 6 addresses security. Section 7 positions Citrate relative to comparable systems. Section 8 discusses open problems and future work.

Implementation status. Throughout this paper, we use the following conventions to signal implementation maturity. Claims marked [Implemented] describe components that have been built and tested on our development testnet. Claims marked [Specified] describe components that have been designed in detail but not yet implemented. Claims marked [Planned] describe features in the design phase. We adopt this convention to ensure that readers can distinguish between what exists and what is proposed.

2. Consensus Layer: GhostDAG with BFT Finality

2.1 GhostDAG Overview

[Implemented] Citrate’s consensus layer implements the GhostDAG protocol [1, 2], which generalizes Nakamoto consensus to directed acyclic graphs. Blocks reference multiple parents (up to 10 in Citrate: 1 selected parent + 9 merge parents), forming a DAG rather than a linear chain. The protocol partitions blocks into a blue set (honest, consistent with the k-cluster rule where anticone size ≤ k) and a red set (potentially adversarial). A selected-parent chain through the blue set provides deterministic total ordering for smart contract execution. Blue scores, the count of blue blocks in a block’s ancestry, serve as the primary weight for tip selection.
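The k-cluster condition above can be sketched in a few lines. This is a toy illustration of the anticone bound only, not the full greedy GhostDAG coloring algorithm (which incrementally inherits blue sets along the selected-parent chain [1, 2]); the example DAG and helper names are invented for illustration.

```python
# Toy sketch of the k-cluster rule: a block is a blue-set candidate only if
# its anticone (blocks neither in its past nor its future) has size <= k.
K = 18  # Citrate's anticone bound

def past(block, parents):
    """All ancestors of `block` given a block -> parent-tuple map."""
    seen, stack = set(), list(parents.get(block, ()))
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents.get(b, ()))
    return seen

def anticone(block, all_blocks, parents):
    """Blocks neither in past(block) nor having `block` in their past."""
    p = past(block, parents)
    return {b for b in all_blocks
            if b != block and b not in p and block not in past(b, parents)}

# Tiny DAG: genesis g; a and b mined in parallel; c merges both.
parents = {"g": (), "a": ("g",), "b": ("g",), "c": ("a", "b")}
blocks = set(parents)

# a and b sit in each other's anticone; anticone size 1 <= k, so both
# satisfy the k-cluster condition here.
assert anticone("a", blocks, parents) == {"b"}
assert len(anticone("a", blocks, parents)) <= K
```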

The k parameter is set to 18, calibrated for the network’s block time and expected propagation delay. For context, the Kaspa network’s Crescendo hardfork (May 2025) increased their k parameter to 124 when moving to 10 blocks per second with max parents of 16 [30]. Our more conservative k=18 reflects our choice of a slower block rate to accommodate the larger block sizes required by AI-augmented blocks containing embedding data (see Section 2.3).

2.2 Block Time and Rate

[Implemented] Citrate produces blocks at approximately 2 blocks per second (0.5-second block time). This is conservative relative to the current state of the art: Kaspa now operates at 10 BPS (100ms blocks) and has a roadmap targeting 32 BPS and ultimately 100 BPS [30]. Our choice of 0.5s is a deliberate design decision, not a limitation:

AI-augmented blocks are larger than standard transaction blocks. When the federated learning extensions described in Paper II are activated, each block will carry embedding vectors alongside transactions. A 768-dimensional float32 embedding adds approximately 3 KB per block. At 2 BPS, this is manageable within the 10 MB maximum block size. At higher block rates, embedding serialization overhead would require either compression (reducing fidelity) or elimination (removing the learning capability). The 0.5s block time provides headroom for this future extension while maintaining competitive throughput for pure transaction workloads.
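The overhead figures above can be checked with back-of-envelope arithmetic; the constants are taken directly from the text.

```python
# Per-block embedding overhead for the planned federated learning extension.
DIM = 768                            # embedding dimensionality
BYTES_PER_FLOAT32 = 4
BPS = 2                              # blocks per second
MAX_BLOCK_BYTES = 10 * 1024 * 1024   # 10 MB maximum block size

embedding_bytes = DIM * BYTES_PER_FLOAT32     # 3072 B, i.e. ~3 KB per block
bandwidth_bytes_per_s = embedding_bytes * BPS # ~6 KB/s of embedding traffic
fraction_of_block = embedding_bytes / MAX_BLOCK_BYTES

assert embedding_bytes == 3072
assert fraction_of_block < 0.001  # well under 0.1% of the block budget
```

At 2 BPS the embedding payload is negligible against the 10 MB cap; the concern at higher block rates is serialization and propagation overhead, not raw size.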

We note that the block rate is a governance-adjustable parameter. If the federated learning extensions prove unnecessary or if embedding compression advances sufficiently, the block rate can be increased through a governance vote without architectural changes.

2.3 BFT Finality Checkpoints

[Implemented] Every 10 blocks (approximately 5 seconds at 2 BPS), the top 100 validators by stake form a finality committee. A checkpoint requires signatures from ≥67 validators (standard ⅔+ BFT threshold) and commits to a specific block hash, blue set, and state root. Once signed, the checkpoint’s ancestry is irreversible, providing deterministic finality within approximately 12 seconds (accounting for network propagation and signature collection).
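The quorum rule can be sketched as follows. Signature verification itself is elided; `signers` is assumed to be the set of validator IDs whose signatures have already been verified, and the function names are illustrative.

```python
# Checkpoint quorum check: a checkpoint over (block hash, blue set, state
# root) finalizes only with signatures from >= 67 of the 100 committee
# members (the standard 2/3+ BFT threshold).
COMMITTEE_SIZE = 100
BFT_THRESHOLD = 67

def checkpoint_finalized(signers, committee):
    """True iff at least BFT_THRESHOLD committee members signed."""
    valid = signers & committee  # discard signatures from non-members
    return len(valid) >= BFT_THRESHOLD

committee = set(range(COMMITTEE_SIZE))
assert checkpoint_finalized(set(range(67)), committee) is True
assert checkpoint_finalized(set(range(66)), committee) is False
```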

Safety holds if fewer than n/3 committee members are Byzantine, by standard BFT arguments [11]. Liveness holds if fewer than half of the validators are Byzantine, by GhostDAG’s blue set property [1, 2]. The finality mechanism is designed to be extensible: Paper II describes how checkpoint commitments can be extended to include learning state (meta-model routing weights, adapter registries) when the federated learning protocol is activated.

2.4 Testnet Performance

The following measurements were collected on a development testnet. We report them as preliminary data, not as production performance guarantees. Testnet configuration: [to be filled with actual testnet parameters...number of validators, geographic distribution, hardware specifications, and test methodology when measurements are collected]. Transaction throughput, finality latency, and block propagation times will be reported here with full methodology disclosure.

Note on throughput claims. At 2 BPS, maximum theoretical throughput depends on transactions per block. With Kaspa’s Crescendo achieving approximately 3,585-4,000 TPS at 10 BPS [30], a proportional estimate for 2 BPS would be approximately 700-800 TPS for standard transfers. Higher throughput is achievable by increasing block size, transaction batching, or block rate, but we decline to publish a specific TPS figure until testnet measurements with defined methodology are available. Claims without measurement are the primary failure mode this revision aims to correct.
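The proportional estimate in the note above reduces to one line of arithmetic:

```python
# Proportional throughput estimate from Kaspa's Crescendo figures [30].
kaspa_bps, kaspa_tps = 10, 3585   # mainnet record at 10 BPS
citrate_bps = 2

tps_per_bps = kaspa_tps / kaspa_bps        # ~358.5 TPS per block-per-second
citrate_estimate = tps_per_bps * citrate_bps

assert round(citrate_estimate) == 717      # within the quoted 700-800 range
```

This scaling assumes comparable transactions per block and ignores Citrate's larger blocks, which is exactly why the text defers a published figure to measured testnet data.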

3. Execution Layer: The Lattice Virtual Machine

3.1 EVM Compatibility

[Implemented] The Lattice Virtual Machine (LVM) executes 100% of EVM bytecode without modification. Any Solidity, Vyper, or Yul contract compiled for Ethereum can be deployed on Citrate without recompilation. This provides immediate access to the Ethereum ecosystem’s tooling: Hardhat, Foundry, OpenZeppelin libraries, Ethers.js, and existing audit infrastructure. The design follows precedent established by Avalanche C-Chain, Moonbeam, and Hedera, all of which have demonstrated that EVM compatibility can coexist with non-Ethereum consensus mechanisms.

Citrate maintains a dual cryptographic identity for each account: ECDSA over secp256k1 for Ethereum wallet compatibility and Ed25519 for native operations. This approach follows Hedera’s production-proven pattern of supporting multiple signature schemes, allowing users to interact with Citrate using existing Ethereum wallets (MetaMask, WalletConnect) while enabling the performance advantages of Ed25519 for native transactions.

3.2 AI Precompiled Contracts

[Implemented] The LVM extends the EVM with five AI-specific precompiled contracts at reserved addresses 0x1000-0x1004. Precompiled contracts execute native code rather than EVM bytecode, enabling operations that would be prohibitively expensive as Solidity implementations. This pattern follows Ethereum’s own precompiles (ecrecover at 0x01, SHA-256 at 0x02, etc.) and is well-precedented in EVM-compatible chains.

Table 1. AI Precompiled Contracts

| Address | Name | Function | Status |
| --- | --- | --- | --- |
| 0x1000 | ModelRegistry | On-chain model registration: model hash, IPFS CID, architecture metadata, version tracking | Implemented |
| 0x1001 | InferenceOracle | Verifiable inference requests with three verification tiers (signature, optimistic, ZK) | Implemented |
| 0x1002 | TensorOps | Native tensor operations: matrix multiply, activation functions, normalization | Implemented |
| 0x1003 | LoRAFactory | LoRA adapter lifecycle: creation, registration, assignment, retirement | Specified |
| 0x1004 | MCPRouter | Model Context Protocol routing: discovery, capability matching, load balancing | Specified |
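From a client's perspective, the precompiles are addressed like ordinary contracts. The addresses below are fixed by Table 1; the selector scheme shown (a 4-byte hash of a signature string, Ethereum-style, with SHA-256 standing in for keccak) is an illustrative assumption, not the finalized Citrate ABI.

```python
import hashlib

# Reserved precompile addresses from Table 1.
PRECOMPILES = {
    "ModelRegistry":   0x1000,
    "InferenceOracle": 0x1001,
    "TensorOps":       0x1002,
    "LoRAFactory":     0x1003,  # Specified, not yet implemented
    "MCPRouter":       0x1004,  # Specified, not yet implemented
}

def precompile_address(name):
    """20-byte EVM-style address of a named precompile."""
    return PRECOMPILES[name].to_bytes(20, "big")

def selector(signature):
    """Illustrative 4-byte function selector (SHA-256 stand-in for keccak)."""
    return hashlib.sha256(signature.encode()).digest()[:4]

# A call to ModelRegistry targets address 0x...1000, like any EVM call.
assert precompile_address("ModelRegistry").hex() == "00" * 18 + "1000"
assert len(selector("registerModel(bytes32,string)")) == 4
```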

Gas costs. Precompile gas costs are calibrated to reflect actual computational cost. [To be populated with measured gas costs from testnet execution. For each precompile, we will report: gas cost per operation, comparison to equivalent Solidity implementation cost, and comparison to off-chain computation cost. These measurements require the gas metering infrastructure to be finalized.]

3.3 Verifiable Inference

[Implemented: signature and optimistic tiers. Specified: ZK tier.] The InferenceOracle precompile supports three verification tiers, providing a cost-security tradeoff:

Signature-based verification (~21,000 gas). The inference provider signs the output with their staked identity. Verification relies on the provider’s economic stake and reputation. This is the lowest-cost option, suitable for low-value queries where the provider’s stake exceeds the potential gain from cheating.

Optimistic verification (~50,000 gas + fraud proof window). The inference result is posted with a 100-block challenge window (~50 seconds at 2 BPS). Any network participant can challenge by re-executing the inference and submitting a fraud proof. If the challenge succeeds, the original provider’s stake is slashed at 50%. This provides strong security for moderate-value queries.

ZK-SNARK verification (50,000-200,000 gas). Cryptographic proof that the inference was computed correctly from the committed model and input. This provides the strongest guarantee but is currently practical only for small models. ZK proof generation for large models (7B+ parameters) remains impractical at current proof generation speeds; we monitor progress from RISC Zero, SP1, and similar projects. The ZK tier is specified but not yet implemented, pending advances in proof generation efficiency.
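The cost-security tradeoff across the three tiers can be summarized as a selection policy. The gas figures come from the text above; the policy itself (pick the cheapest tier whose guarantee covers the value at risk) is an illustrative assumption, not protocol-mandated.

```python
# Three verification tiers and their approximate gas costs (from the text).
TIERS = [
    ("signature",  21_000, "economic: provider stake must exceed value at risk"),
    ("optimistic", 50_000, "economic: 100-block (~50 s) fraud-proof window"),
    ("zk",        200_000, "cryptographic: validity proof (small models only)"),
]

def choose_tier(value_at_risk, provider_stake, zk_available):
    """Cheapest tier whose guarantee plausibly covers the query value."""
    if value_at_risk <= provider_stake:
        return "signature"    # stake alone deters a rational cheater
    if not zk_available:
        return "optimistic"   # rely on challengers during the window
    return "zk"               # pay for a cryptographic guarantee

assert choose_tier(1_000, provider_stake=10_000, zk_available=False) == "signature"
assert choose_tier(50_000, provider_stake=10_000, zk_available=False) == "optimistic"
assert choose_tier(50_000, provider_stake=10_000, zk_available=True) == "zk"
```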

3.4 Model Registry

[Implemented] The ModelRegistry precompile at 0x1000 provides on-chain registration and discovery of AI models. Each registered model entry includes: a deterministic hash of the model weights, an IPFS Content Identifier (CID) for decentralized weight storage, architecture metadata (parameter count, layer configuration, supported input/output formats), version history linking successive model iterations, and the registrant’s staked identity. The registry serves as a verifiable source of truth for which models are available on the network, their capabilities, and their provenance.
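A registry entry bundles the fields listed above. The structure and field names below are illustrative; the on-chain encoding is defined by the precompile itself, and the CID and registrant values are hypothetical placeholders.

```python
from dataclasses import dataclass
import hashlib

@dataclass
class ModelEntry:
    weights_hash: str     # deterministic hash of the serialized weights
    ipfs_cid: str         # content identifier for decentralized weight storage
    param_count: int      # architecture metadata
    io_formats: tuple     # supported input/output formats
    registrant: str       # staked identity of the registrant
    previous_version: str = ""  # links successive model iterations

def hash_weights(weights):
    """Deterministic commitment to the model weights (SHA-256 here)."""
    return hashlib.sha256(weights).hexdigest()

entry = ModelEntry(
    weights_hash=hash_weights(b"\x00" * 16),  # placeholder weight bytes
    ipfs_cid="bafy...",                        # hypothetical CID
    param_count=124_000_000,
    io_formats=("text/plain",),
    registrant="0xabc...",                     # hypothetical staked identity
)
assert len(entry.weights_hash) == 64  # 32-byte SHA-256 digest, hex-encoded
```

Because the weight hash is deterministic, any party can fetch the weights from IPFS and confirm they match the on-chain commitment.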

4. Orchestration Layer: Model Context Protocol

[Specified] The third protocol layer implements the Model Context Protocol (MCP), originated by Anthropic [23], adapted for on-chain model orchestration. MCP provides standardized REST endpoints for model discovery, inference requests, capability negotiation, and tool integration. In Citrate’s implementation, MCP endpoints are routed through the MCPRouter precompile at 0x1004, enabling smart contracts to discover available models, negotiate capabilities, and route inference requests to appropriate providers.

The MCP layer is designed but not yet implemented. Its primary functions include: model capability advertisement (what inputs a model accepts, what outputs it produces, what tasks it performs well), inference request routing (matching a query to the most appropriate model based on capability, load, and cost), and a decentralized marketplace where model hosts set inference prices and consumers select providers. Implementation of the MCP layer depends on finalization of the model registry and inference oracle, which are prerequisites for meaningful orchestration.
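Capability-based routing as described above might look like the following sketch. All field names and the scoring rule (cheapest capable provider, ties broken by load) are illustrative assumptions; the MCP layer is specified, not implemented.

```python
# Hypothetical registry of advertised model capabilities.
models = [
    {"id": "m1", "tasks": {"summarize", "classify"}, "price": 5, "load": 0.9},
    {"id": "m2", "tasks": {"summarize"},             "price": 3, "load": 0.2},
    {"id": "m3", "tasks": {"translate"},             "price": 1, "load": 0.1},
]

def route(task, registry):
    """Match a task to the cheapest capable provider, preferring low load."""
    candidates = [m for m in registry if task in m["tasks"]]
    if not candidates:
        return None
    best = min(candidates, key=lambda m: (m["price"], m["load"]))
    return best["id"]

assert route("summarize", models) == "m2"   # cheaper than m1
assert route("translate", models) == "m3"
assert route("draw", models) is None        # no capable provider registered
```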

5. SALT Tokenomics

[Specified] SALT is Citrate’s native token with a fixed supply of 1 billion tokens. SALT serves three functions: gas payment for transaction execution and precompile calls, staking collateral for validator participation and slashing, and governance weight for protocol parameter votes.

Table 2. SALT Token Distribution

| Allocation | Percentage | Amount | Vesting |
| --- | --- | --- | --- |
| Mining rewards | 50% | 500,000,000 | Emitted over network lifetime via block rewards |
| Ecosystem development | 25% | 250,000,000 | Grants, partnerships, developer incentives |
| Treasury | 10% | 100,000,000 | DAO-governed reserve |
| Team | 15% | 150,000,000 | 4-year vesting with 1-year cliff |

The base block reward is 10 SALT per block. Validators that host registered AI models and serve verifiable inference receive additional rewards: an inference bonus (proportional to verified inference volume) and a storage bonus (proportional to model hosting capacity). These bonuses incentivize validators to contribute AI infrastructure beyond basic transaction validation.
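The reward composition can be sketched as follows. The 10 SALT base reward is from the spec; the bonus formulas are illustrative stand-ins for "proportional to" and are not finalized parameters.

```python
BASE_REWARD = 10.0  # SALT per block (from the spec)

def block_reward(verified_inferences, total_inferences,
                 hosted_gb, total_hosted_gb,
                 inference_pool=2.0, storage_pool=1.0):
    """Base reward plus proportional inference and storage bonuses.

    The per-block bonus pool sizes (2 and 1 SALT) are hypothetical.
    """
    inference_bonus = (inference_pool * verified_inferences / total_inferences
                       if total_inferences else 0.0)
    storage_bonus = (storage_pool * hosted_gb / total_hosted_gb
                     if total_hosted_gb else 0.0)
    return BASE_REWARD + inference_bonus + storage_bonus

# A validator serving 25% of verified inference and 10% of model storage:
reward = block_reward(25, 100, 10.0, 100.0)
assert reward == 10.0 + 0.5 + 0.1
```

A validator doing only transaction validation earns the base reward; the bonuses exist precisely so that hosting models and serving inference strictly dominates passive validation.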

Minimum stake for validator participation is 10,000 SALT. Slashing penalties are tiered: 50% stake slash for fraudulent inference (detected via optimistic or ZK fraud proofs), 25% for extended downtime (>24 hours of missed attestations), and 10% for equivocation (signing conflicting checkpoints). Slashed funds are distributed 50% to the challenger who detected the violation and 50% to the treasury.
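The slashing schedule and the 50/50 challenger/treasury split reduce to a small function. The percentages are from the spec; the function structure is illustrative.

```python
# Tiered slashing rates from the spec.
SLASH_RATES = {
    "fraudulent_inference": 0.50,
    "extended_downtime":    0.25,  # >24 h of missed attestations
    "equivocation":         0.10,  # signing conflicting checkpoints
}

def slash(stake, offense):
    """Apply the tiered penalty; returns (remaining, challenger, treasury)."""
    penalty = stake * SLASH_RATES[offense]
    return stake - penalty, penalty * 0.5, penalty * 0.5

# Fraudulent inference at the 10,000 SALT minimum stake:
remaining, challenger, treasury = slash(10_000, "fraudulent_inference")
assert (remaining, challenger, treasury) == (5_000, 2_500, 2_500)
```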

6. Security Analysis

6.1 Consensus Security

The GhostDAG+BFT consensus inherits well-studied security properties. Safety (no conflicting finalized states) holds under the standard Byzantine assumption f < n/3, where f is the number of Byzantine committee members and n is the committee size [11]. With a 100-validator committee requiring 67 signatures, this tolerates up to 33 Byzantine validators. Liveness (honest transactions are eventually included) holds under GhostDAG’s properties when fewer than half of validators are Byzantine [1, 2].

The PoW component of GhostDAG provides Sybil resistance for block production, while the BFT finality layer provides fast, deterministic finality. This hybrid approach avoids the finality delays of pure PoW systems (Bitcoin: ~60 minutes for practical finality) while maintaining the permissionless participation that pure BFT systems sacrifice.

6.2 Inference Verification Security

The three-tier verification system provides defense in depth. The signature tier is vulnerable to a rational provider who determines that the gain from cheating exceeds their staked collateral; the economic security bound is explicit and auditable. The optimistic tier is vulnerable during the challenge window if no honest observer is online; this risk is mitigated by the economic incentive for challengers (50% of slashed stake). The ZK tier, when implemented, will provide cryptographic rather than economic security, but at significantly higher gas cost.

Open problem: ZK proof generation for large model inference remains impractical. A ZK proof for a single forward pass of a 7B parameter model would require proof generation times measured in hours with current technology. Until proof generation speeds improve by several orders of magnitude, the ZK tier is limited to small models (<100M parameters) and critical applications where the gas cost is justified.

6.3 Dual Cryptography

[Implemented] Each Citrate account maintains two key pairs: ECDSA over secp256k1 (for Ethereum ecosystem compatibility) and Ed25519 (for native efficiency). Ed25519 provides approximately 10× faster signature verification than ECDSA and deterministic signatures (eliminating the k-nonce vulnerability that has caused ECDSA key exposure in production systems). Hedera Hashgraph has operated a similar dual-cryptography design in production since 2019, demonstrating its viability at scale.

7. Comparative Position

Citrate occupies a design space at the intersection of three existing categories: DAG-based consensus networks, EVM-compatible execution environments, and decentralized AI infrastructure. No existing system occupies this exact intersection. We compare Citrate to representative systems in each category along the dimensions most relevant to each comparison, rather than forcing a single comparison table across incommensurable systems.

7.1 DAG Consensus Comparison: Kaspa

Kaspa is the production reference implementation of GhostDAG and the most directly comparable consensus-layer system. Following the Crescendo hardfork (May 2025), Kaspa operates at 10 BPS with k=124 and max parents=16, achieving a record 3,585 TPS on mainnet. Citrate operates at 2 BPS with k=18 and max parents=10, a more conservative configuration reflecting the additional per-block overhead of AI-augmented blocks. Kaspa does not currently support smart contracts or on-chain AI operations (though smart contract support via vProgs is in development). Citrate’s contribution relative to Kaspa is the addition of a general-purpose execution layer with AI-specific precompiles atop the same consensus family.

7.2 EVM Comparison: Ethereum L2s and EVM Chains

Citrate’s EVM compatibility places it in comparison with Ethereum Layer 2 solutions (Arbitrum, Optimism, zkSync) and alternative EVM chains (Avalanche C-Chain, Moonbeam, Hedera). These systems provide full EVM compatibility with varying consensus mechanisms and performance characteristics. Citrate’s differentiator is the AI precompile suite: no existing EVM-compatible chain provides native precompiled contracts for model registration, verifiable inference, or tensor operations. Applications requiring on-chain AI capabilities currently must implement these as expensive Solidity contracts or rely on off-chain oracles with trust assumptions.

7.3 Decentralized AI Comparison: Bittensor

Bittensor is the most prominent decentralized AI coordination network. It operates as a specialized system for incentivizing AI model quality through a mining mechanism where miners are rewarded based on the quality of their model outputs as judged by validators. Bittensor and Citrate serve fundamentally different purposes: Bittensor is an AI coordination marketplace; Citrate is a general-purpose blockchain with AI capabilities. Comparing them on TPS or finality time would be a category error; they optimize for different metrics.

The meaningful comparison is on AI integration approach. Bittensor’s subnet architecture allows specialized AI tasks to run in isolated subnets with task-specific validation. Citrate’s precompile approach embeds AI operations directly in the execution layer, making them composable with arbitrary smart contract logic. Neither approach dominates the other; they serve different use cases. Bittensor excels at pure AI task markets. Citrate targets applications requiring both general programmability and AI capabilities: for example, DeFi protocols that use on-chain inference for risk assessment, or DAOs that use model-based governance analysis.

8. Open Problems and Future Work

8.1 Federated Learning Integration

[Planned] The consensus layer is architecturally designed to support a federated learning protocol in which BFT finality checkpoints serve as synchronization points for distributed model training. Paper II describes the theoretical framework for this integration, including a paraconsistent aggregation function that preserves contradictory model outputs and a recursive mentor-mentee architecture for targeted model improvement. This integration requires extending the checkpoint commitment to include learning state (meta-model routing weights, adapter registries, per-node performance profiles) and implementing the meta-model training pipeline; these components are designed but not yet built. The primary open questions are: (a) what is the minimum viable meta-model architecture that provides useful routing at checkpoint-interval timescales? (b) how do LoRA adapter composition effects scale with the number of accumulated adapters? and (c) what convergence guarantees can be proven for the recursive learning loop under Byzantine conditions?

8.2 Throughput Scaling

Kaspa’s demonstrated trajectory from 1 BPS to 10 BPS (with 32 BPS and 100 BPS on the roadmap) suggests that GhostDAG-based systems can scale block rates significantly. Citrate’s 2 BPS is intentionally conservative to accommodate future AI-augmented blocks, but the block rate is a governance-adjustable parameter. If embedding compression techniques reduce per-block overhead sufficiently, increasing to 10+ BPS becomes feasible, proportionally increasing throughput. We view throughput scaling as an engineering optimization rather than an architectural limitation.

8.3 Hardware Optimization

Paper V describes ATIS (Analog Token Importance Scoring), a proposed analog middleware for energy-efficient transformer attention pruning using FPAA hardware. If realized, ATIS would provide hardware-level optimization for the TensorOps precompile at 0x1002, enabling more efficient on-chain tensor operations. This remains a research proposal requiring simulation, prototyping, and validation.

8.4 Bridge Infrastructure

Paper VI describes the Memetic Money Portal, an ERC-6551 bridge architecture that provides cross-chain connectivity between Citrate and Ethereum. The bridge design addresses the well-documented security challenges of cross-chain bridges (over $2.5 billion in bridge exploits as of 2024) through fragmented liquidity across NFT-owned vaults. This is a risk-mitigation architecture, not a security solution: it bounds maximum per-exploit losses while the underlying attack vectors (smart contract bugs, oracle compromise, social engineering) must be addressed through separate mechanisms.

9. Conclusion

Citrate is a BlockDAG network that adds AI-native execution capabilities to GhostDAG consensus. The implemented system provides: parallel block production with BFT finality, full EVM compatibility, five AI-specific precompiled contracts, dual cryptographic identity, and SALT tokenomics with validator incentives aligned to AI infrastructure provision.

The system’s distinguishing contribution is not any individual component but their combination: a general-purpose programmable blockchain with native AI operations, built on a consensus architecture designed to support federated learning integration. Whether this combination produces emergent capabilities beyond what the individual components provide is an empirical question that the companion papers in this series address theoretically (Papers II-III), from a hardware perspective (Paper V), economically (Papers VI-VII), and through governance design (Paper VIII). The biological inspiration for the architecture is documented in Paper IX.

We have been deliberate about distinguishing implemented components from specified designs and planned features. We have declined to publish throughput figures without testnet measurement methodology. We have positioned Citrate relative to comparable systems along dimensions where comparison is meaningful rather than constructing favorable but misleading cross-category comparisons. These choices reflect our conviction that academic honesty is not a weakness but a prerequisite for credibility, and that credibility, not headline numbers, is what builds lasting systems.

References

[1] Sompolinsky, Y., & Zohar, A. (2015). Secure high-rate transaction processing in Bitcoin. Financial Cryptography and Data Security, 507-527.

[2] Sompolinsky, Y., & Zohar, A. (2018). PHANTOM and GHOSTDAG: A scalable generalization of Nakamoto consensus. IACR Cryptology ePrint Archive.

[3] Keidar, I., Kokoris-Kogias, E., Naor, O., & Spiegelman, A. (2021). All you need is DAG. Proceedings of the ACM PODC, 165-175.

[4] Danezis, G., Kokoris-Kogias, L., Sonnino, A., & Spiegelman, A. (2022). Narwhal and Tusk: A DAG-based mempool and efficient BFT consensus. Proceedings of EuroSys.

[5] Spiegelman, A., Giridharan, N., Sonnino, A., & Kokoris-Kogias, L. (2022). Bullshark: DAG BFT protocols made practical. Proceedings of CCS.

[6] McMahan, B., Moore, E., Ramage, D., Hampson, S., & Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. AISTATS, 1273-1282.

[7] Shazeer, N., et al. (2017). Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. ICLR 2017.

[8] Hu, E. J., et al. (2021). LoRA: Low-Rank Adaptation of Large Language Models. ICLR 2022.

[9] Vaswani, A., et al. (2017). Attention is all you need. NeurIPS 30.

[10] Buterin, V. (2014). Ethereum: A next-generation smart contract and decentralized application platform.

[11] Castro, M., & Liskov, B. (1999). Practical Byzantine fault tolerance. OSDI, 173-186.

[12] Ben-Sasson, E., et al. (2014). Succinct non-interactive zero knowledge for a von Neumann architecture. USENIX Security, 781-796.

[13] Rao, J., & Opentensor Foundation. (2021). Bittensor: A peer-to-peer intelligence market.

[14] Belnap, N. D. (1977). A useful four-valued logic. In: Dunn, J. M., Epstein, G. (eds) Modern Uses of Multiple-Valued Logic. Episteme, vol 2. Springer, Dordrecht.

[15] Kirkpatrick, J., et al. (2017). Overcoming catastrophic forgetting in neural networks. PNAS, 114(13), 3521-3526.

[16] Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system.

[17] Ilharco, G., et al. (2023). Editing models with task arithmetic. ICLR 2023.

[18] Yadav, P., et al. (2023). TIES-Merging: Resolving interference when merging models. NeurIPS 2023.

[19] Biderman, S., et al. (2024). LoRA learns less and forgets less. Transactions on Machine Learning Research.

[20] Weissbourd, B., et al. (2021). A genetically tractable jellyfish model for systems and evolutionary neuroscience. Cell, 184(24), 5854-5868.

[21] Pallasdies, F., et al. (2019). From single neurons to behavior in the jellyfish Aurelia aurita. eLife, 8, e50084.

[22] Klosowski, L. (2023). Mentor/Mentee Relativity: Organizational Learning in Mentorship-Driven Swarms. Cnidarian Foundation Working Paper.

[23] Anthropic. (2024). Model Context Protocol Specification. Anthropic Technical Report.

[24] Senge, P. M. (1990). The Fifth Discipline. Doubleday.

[25] Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company. Oxford University Press.

[26] Argyris, C., & Schön, D. A. (1978). Organizational Learning. Addison-Wesley.

[27] Yin, M., et al. (2019). HotStuff: BFT consensus with linearity and responsiveness. PODC, 347-356.

[28] Priest, G. (2006). In Contradiction: A Study of the Transconsistent. Oxford University Press.

[29] Citrate Network. (2025). Citrate Technical White Paper, Version 1.0. Cnidarian Foundation.

[30] Kaspa Network. (2025). KIP-14: Crescendo Hardfork. Kaspa Improvement Proposal. Activated May 5, 2025.

Appendix A: Protocol Parameters

Table A1. Citrate Protocol Configuration

| Parameter | Value | Rationale |
| --- | --- | --- |
| Chain ID | 1 (mainnet), 1337 (testnet) | Standard convention |
| Block time | ~0.5 seconds (2 BPS) | Accommodates AI-augmented block overhead |
| Max block size | 10 MB | Bounds embedding vector size per block |
| Max parents | 10 (1 selected + 9 merge) | Balances DAG width and propagation |
| k parameter | 18 | Blue set anticone bound for 2 BPS |
| Checkpoint interval | 10 blocks (~5 s) | BFT finality synchronization point |
| Committee size | 100 validators | BFT quorum with practical liveness |
| BFT threshold | 67 signatures (≥67%) | Standard 2/3+ Byzantine threshold |
| Minimum stake | 10,000 SALT | Validator entry cost |
| Block reward (base) | 10 SALT | Supplemented by inference and storage bonuses |
| Slashing (fraud) | 50% of stake | Economic deterrent for dishonest inference |
| Slashing (downtime) | 25% of stake | After >24 h missed attestations |
| Slashing (equivocation) | 10% of stake | Signing conflicting checkpoints |
| SALT total supply | 1,000,000,000 | Fixed supply |
| Signature schemes | ECDSA (secp256k1) + Ed25519 | Ethereum compat + native efficiency |
| Fraud proof window | 100 blocks (~50 s) | Optimistic verification challenge period |

───

This paper is part of the Gradient Papers series published by the Cnidarian Foundation.

Correspondence: larry@cnidarianfoundation.org
