Governance

Code of Ethics

We take ethics seriously -- not as an afterthought but as a core protocol concern. The Citrate Code of Ethics governs the responsible development, deployment, and operation of AI systems on the network. As an AI-native blockchain, Citrate recognizes that the intersection of artificial intelligence and decentralized finance creates unique ethical obligations. This code is enforced through the Ethics Board and backed by the BR1J Constitution.

Transparency Requirements

All AI models registered on the Citrate ModelRegistry must meet minimum transparency standards. Opacity in AI systems undermines the trust that decentralized networks depend on.

Model card disclosure: Every registered model must publish a model card that includes:

  • Training data sources and their provenance
  • Model architecture and parameter count
  • Known limitations and failure modes
  • Intended use cases and prohibited uses
  • Performance benchmarks on standard evaluation sets
  • Date of last training or fine-tuning

Example model card:
{
  "model_card": {
    "name": "sentiment-v1",
    "architecture": "transformer-encoder",
    "parameters": "125M",
    "training_data": [
      {
        "source": "Financial News Corpus v3",
        "license": "CC-BY-4.0",
        "size": "2.1M documents",
        "date_range": "2020-01 to 2024-12"
      }
    ],
    "known_limitations": [
      "May exhibit bias toward English-language sources",
      "Accuracy degrades on texts shorter than 20 words",
      "Not trained on cryptocurrency-specific terminology"
    ],
    "prohibited_uses": [
      "Automated trading decisions without human oversight",
      "Individual credit scoring",
      "Surveillance or tracking"
    ],
    "benchmarks": {
      "accuracy_imdb": 0.94,
      "accuracy_financial_phrasebank": 0.89,
      "f1_score": 0.91
    }
  }
}
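
As a minimal sketch, the required fields can be checked before submission. The field names below mirror the JSON example above and are illustrative assumptions, not the ModelRegistry's actual on-chain schema:

```python
import json

# Fields the Code of Ethics requires in every model card
# (names follow the JSON example above; the real schema may differ).
REQUIRED_FIELDS = [
    "name",
    "architecture",
    "parameters",
    "training_data",
    "known_limitations",
    "prohibited_uses",
    "benchmarks",
]

def validate_model_card(raw: str) -> list[str]:
    """Return the list of missing required fields (empty list = valid)."""
    card = json.loads(raw)["model_card"]
    return [f for f in REQUIRED_FIELDS if f not in card or not card[f]]

# A card missing most required fields fails validation
partial = '{"model_card": {"name": "sentiment-v1", "architecture": "transformer-encoder"}}'
print(validate_model_card(partial))
```

A registration front end could run a check like this client-side and refuse to submit incomplete cards, rather than waiting for the Ethics Board to flag them after the fact.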

Models that fail to provide adequate model cards are flagged by the Ethics Board and may be delisted from the ModelRegistry after a 30-day remediation period.

Bias Auditing

AI models on Citrate are subject to periodic bias audits. The Ethics Board maintains a set of fairness benchmarks and requires that high-impact models (those used in financial decisions, governance inputs, or access control) demonstrate acceptable performance across demographic groups.

Audit process:

  1. The Ethics Board selects models for audit based on usage volume and impact category
  2. The model is evaluated against the Citrate Fairness Benchmark Suite (CFBS)
  3. Results are published on-chain for community review
  4. Models that fail fairness thresholds receive a "bias warning" label
  5. Model operators have 60 days to remediate and request re-evaluation
  6. Persistent failures result in delisting
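
The lifecycle above can be sketched as a small state machine. The state names and transition function are illustrative assumptions; only the 60-day remediation window comes from the process itself:

```python
from enum import Enum

class AuditState(Enum):
    LISTED = "listed"
    BIAS_WARNING = "bias_warning"   # failed fairness thresholds
    DELISTED = "delisted"           # persistent failure

def next_state(state: AuditState, passed_audit: bool,
               days_since_warning: int = 0) -> AuditState:
    """Advance a model's audit state after an evaluation.

    A failing model receives a bias warning; if it is still failing
    after the 60-day remediation window, it is delisted.
    """
    if passed_audit:
        return AuditState.LISTED
    if state == AuditState.BIAS_WARNING and days_since_warning > 60:
        return AuditState.DELISTED
    return AuditState.BIAS_WARNING

# A model that fails, remediates late, and fails again is delisted
state = next_state(AuditState.LISTED, passed_audit=False)
state = next_state(state, passed_audit=False, days_since_warning=75)
print(state)
```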

Fairness metrics evaluated:

Metric               Description                                                  Threshold
Demographic parity   Output distribution should not vary significantly by group   <10% disparity
Equal opportunity    True positive rate should be similar across groups           <15% disparity
Calibration          Confidence scores should be equally accurate across groups   <5% calibration gap

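
A rough sketch of how the demographic-parity check might be computed from per-group model outputs. The 10% threshold comes from the table above; the function names and input shape are illustrative assumptions, not the CFBS implementation:

```python
def demographic_parity_gap(positive_rates: dict[str, float]) -> float:
    """Largest pairwise difference in positive-output rate across groups."""
    rates = list(positive_rates.values())
    return max(rates) - min(rates)

def passes_demographic_parity(positive_rates: dict[str, float],
                              threshold: float = 0.10) -> bool:
    """The table above requires <10% disparity in output distribution."""
    return demographic_parity_gap(positive_rates) < threshold

# Example: positive-classification rate per demographic group
rates = {"group_a": 0.42, "group_b": 0.47, "group_c": 0.39}
print(passes_demographic_parity(rates))  # gap = 0.08, under the 0.10 threshold
```

The equal-opportunity and calibration checks follow the same pattern, substituting per-group true positive rates and per-group calibration error for the raw positive rates.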
# Check audit status of a model
citrate-cli ethics audit-status --model-id 0xYOUR_MODEL_ID --rpc https://rpc.cnidarian.cloud
# Request a voluntary audit (improves model reputation)
citrate-cli ethics request-audit --model-id 0xYOUR_MODEL_ID --rpc https://rpc.cnidarian.cloud --private-key $PRIVATE_KEY

Model operators who proactively request audits and maintain clean audit records receive a "Certified Fair" badge that improves their routing preference in the InferenceEngine.

Human Oversight

Citrate requires human oversight for AI systems operating in high-stakes domains. The constitution defines three oversight levels:

Level 1 -- Human-in-the-loop: A human must approve every AI output before it takes effect. Required for governance proposal analysis, large treasury decisions, and constitutional interpretation.

Level 2 -- Human-on-the-loop: A human monitors AI outputs and can intervene but does not approve each individually. Required for bridge attestations, model registration review, and medium-value financial operations.

Level 3 -- Human-over-the-loop: Humans set policies and review aggregate performance but do not monitor individual outputs. Acceptable for routine inference serving, content generation, and low-value transactions.
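
The three levels can be summarized in code. This mapping is a sketch of the constitution's definitions with illustrative names, not a protocol artifact:

```python
from enum import IntEnum

class OversightLevel(IntEnum):
    HUMAN_IN_THE_LOOP = 1    # every output needs human approval
    HUMAN_ON_THE_LOOP = 2    # human monitors and can intervene
    HUMAN_OVER_THE_LOOP = 3  # human sets policy, reviews aggregates

def requires_per_output_approval(level: OversightLevel) -> bool:
    """Only Level 1 gates each individual AI output on a human decision."""
    return level == OversightLevel.HUMAN_IN_THE_LOOP

print(requires_per_output_approval(OversightLevel.HUMAN_IN_THE_LOOP))
```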

Smart contracts that use AI inference can declare their oversight level:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract GovernanceAdvisor {
    // Level 1: Human approval required for every output
    uint8 public constant OVERSIGHT_LEVEL = 1;

    // The account authorized to approve AI outputs
    address public immutable authorizedHuman;

    struct PendingAdvice {
        bytes32 requestId;
        bytes output;
        bool humanApproved;
    }

    mapping(bytes32 => PendingAdvice) public pendingAdvice;

    modifier onlyAuthorizedHuman() {
        require(msg.sender == authorizedHuman, "not authorized");
        _;
    }

    constructor(address _authorizedHuman) {
        authorizedHuman = _authorizedHuman;
    }

    function onInferenceResult(bytes32 requestId, bytes calldata output) external {
        pendingAdvice[requestId] = PendingAdvice(requestId, output, false);
        // Result is stored but NOT acted upon until human approval
    }

    function approveAdvice(bytes32 requestId) external onlyAuthorizedHuman {
        pendingAdvice[requestId].humanApproved = true;
        // Now the advice can be used in governance decisions
    }
}

Responsible Disclosure

Security researchers who discover vulnerabilities in AI models or the inference infrastructure must follow the responsible disclosure process:

  1. Report: Submit the vulnerability through the secure disclosure portal at https://security.cnidarian.cloud or via encrypted email to the Security Working Group
  2. Triage: The Security Working Group acknowledges receipt within 24 hours and assesses severity within 72 hours
  3. Remediation: The model operator is notified and given a remediation timeline based on severity (Critical: 7 days, High: 14 days, Medium: 30 days, Low: 90 days)
  4. Disclosure: After remediation (or after the deadline if unresolved), the vulnerability is publicly disclosed
  5. Reward: Researchers receive a bug bounty from the Security Working Group fund
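
The remediation timelines above can be sketched as a deadline helper. The day counts come from step 3; the function itself is an illustrative assumption:

```python
from datetime import date, timedelta

# Remediation windows from the disclosure process above, in days
REMEDIATION_DAYS = {"critical": 7, "high": 14, "medium": 30, "low": 90}

def disclosure_deadline(reported_on: date, severity: str) -> date:
    """Date after which the vulnerability may be publicly disclosed."""
    return reported_on + timedelta(days=REMEDIATION_DAYS[severity.lower()])

print(disclosure_deadline(date(2025, 1, 1), "critical"))  # 2025-01-08
```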

Bug bounty amounts by severity:

Severity                                              Bounty Range
Critical (protocol-level, fund loss risk)             10,000 - 100,000 SALT
High (model manipulation, attestation bypass)         5,000 - 25,000 SALT
Medium (information disclosure, minor manipulation)   1,000 - 5,000 SALT
Low (cosmetic, non-exploitable)                       100 - 1,000 SALT

Researchers who follow the responsible disclosure process are protected from any legal action by the Cnidarian Foundation or DAO. Researchers who exploit vulnerabilities for personal gain rather than disclosing them are subject to the same slashing and delisting penalties as malicious operators.

Further Reading