Paraconsistent Consensus
At every finality checkpoint (every 10 blocks, roughly every 12 seconds), the network runs the learning protocol. Validator nodes are ranked by blue score: a composite of inference accuracy, latency, and cooperation history. The top quartile become temporary mentors for the current epoch.
Mentors generate compressed LoRA adapter diffs and broadcast them to their assigned mentees. Mentees apply the updates locally. The entire process happens within the checkpoint window. By the time the next block is produced, every node in the network has access to the best knowledge the network had at the prior checkpoint.
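The mentor-selection step can be sketched in a few lines. This is a minimal illustration, not the Citrate implementation: the `Validator` record and `select_mentors` helper are hypothetical names, and the blue score is treated as a single precomputed float.

```python
# Sketch of per-epoch mentor selection. All names here are illustrative
# assumptions; blue_score stands in for the composite reputation metric.
from dataclasses import dataclass

@dataclass
class Validator:
    node_id: str
    blue_score: float  # composite of accuracy, latency, cooperation history

def select_mentors(validators: list[Validator]) -> list[Validator]:
    """Rank validators by blue score; the top quartile become mentors."""
    ranked = sorted(validators, key=lambda v: v.blue_score, reverse=True)
    cutoff = max(1, len(ranked) // 4)  # top 25%, at least one mentor
    return ranked[:cutoff]

validators = [Validator("a", 0.91), Validator("b", 0.72),
              Validator("c", 0.85), Validator("d", 0.60)]
mentors = select_mentors(validators)
print([m.node_id for m in mentors])  # top quartile of 4 nodes -> ["a"]
```

Each selected mentor would then generate its adapter diff and broadcast it to its assigned mentees within the checkpoint window.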
Paraconsistent logic (Belnap's FOUR) handles the case where a node's outputs contradict the network consensus. Instead of being slashed immediately, contradictory nodes enter a quarantine state while their outputs are analyzed. If the contradiction is adversarial, slashing is triggered. If it is informative, meaning the node found something the rest of the network missed, the node is credited and the network updates.
Why Paraconsistent Logic?
Classical consensus protocols assume binary truth: a transaction is valid or invalid, a block is honest or malicious. But AI-native workloads produce inherently uncertain outputs. Two models might return different inference results for the same input, and neither is necessarily "wrong."
Paraconsistent logic is a family of formal logics that tolerate contradictions without collapsing into triviality. In standard logic, a single contradiction allows you to derive any conclusion (the principle of explosion). Paraconsistent systems isolate contradictions, allowing the network to reason about conflicting information without discarding it.
In Citrate, this means:
- Contradictory inference results from different nodes are retained and weighted, not discarded.
- The network builds a richer model of uncertainty over time.
- Finality checkpoints aggregate these signals into improved collective accuracy.
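The "isolate contradictions instead of exploding" behavior can be made concrete with Belnap's four-valued lattice. The sketch below is illustrative only: the `combine` rule shown is the knowledge-ordering join (merging two reports can only add information), and the enum and function names are assumptions, not Citrate APIs.

```python
from enum import Enum

class Four(Enum):
    """Belnap's FOUR: told-true, told-false, told-both, told-neither."""
    TRUE = "T"
    FALSE = "F"
    BOTH = "B"      # contradictory evidence -- retained, not discarded
    NEITHER = "N"   # no evidence yet

def combine(a: Four, b: Four) -> Four:
    """Join in the knowledge ordering: merging two reports never
    trivializes the system; conflicting reports land on BOTH."""
    if a == b:
        return a
    if Four.NEITHER in (a, b):
        return b if a == Four.NEITHER else a
    # any mix of TRUE/FALSE/BOTH carries conflicting or saturated evidence
    return Four.BOTH

print(combine(Four.TRUE, Four.FALSE).name)  # BOTH -- the contradiction is isolated
```

Contrast this with classical two-valued logic, where deriving both `TRUE` and `FALSE` for the same proposition licenses any conclusion; here the conflict is simply recorded as a distinct lattice point.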
The Learning Loop
At each finality checkpoint (approximately every 12 seconds), the network executes a cooperative learning cycle:
- Signal Collection -- Nodes broadcast their local inference results and confidence scores as mentorship signals.
- Contradiction Detection -- The checkpoint aggregator identifies conflicting results and maps them to the paraconsistent lattice.
- Recursive Refinement -- A meta-model processes the contradictions, weighting each node's contribution by its historical accuracy (tracked via blue score).
- Adapter Aggregation -- LoRA adapter updates from participating nodes are merged using federated averaging, producing a shared model improvement.
- State Commitment -- The refined meta-model parameters and aggregated adapters are committed to chain state.
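The adapter-aggregation step (step 4) can be sketched as a blue-score-weighted federated average. Representing each node's compressed LoRA diff as a flat list of floats is an assumption for illustration; real adapter updates would be low-rank weight matrices, and `federated_average` is a hypothetical name.

```python
# Federated averaging of LoRA adapter updates, weighted by blue score.
# The flat-list representation of an update is an illustrative assumption.
def federated_average(updates, blue_scores):
    """Blue-score-weighted mean of per-node adapter updates."""
    total = sum(blue_scores)
    merged = [0.0] * len(updates[0])
    for update, score in zip(updates, blue_scores):
        for i, delta in enumerate(update):
            merged[i] += delta * (score / total)
    return merged

updates = [[0.2, -0.1], [0.4, 0.3]]   # two nodes' compressed diffs
blue_scores = [3.0, 1.0]              # the first node is 3x more trusted
print(federated_average(updates, blue_scores))  # ~ [0.25, 0.0]
```

The weighted mean pulls the shared model toward the updates of historically accurate nodes, which is how step 3's accuracy weighting carries through into the committed state.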
Checkpoint N: Collect signals --> Detect contradictions --> Refine meta-model --> Aggregate adapters --> Commit
   ^                                                                                                       |
   +------------------------ learning from checkpoint N feeds into checkpoint N+1 -------------------------+
Mentorship Signals
Nodes in Citrate don't just validate transactions -- they teach each other. When a node processes an inference request, it broadcasts a mentorship signal containing:
- The inference result and confidence interval
- The model and adapter version used
- A gradient summary (compressed update direction)
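The signal payload above might be modeled as follows. The field names, the flat-list gradient summary, and the reputation-weighted blending function are all illustrative assumptions, not the Citrate wire format.

```python
from dataclasses import dataclass

@dataclass
class MentorshipSignal:
    result: float            # inference result
    confidence: tuple        # (low, high) confidence interval
    model_version: str       # model + adapter version used
    gradient_summary: list   # compressed update direction

def weighted_incorporate(local: float, signals, reputations) -> float:
    """Blend peers' results into a local estimate, weighted by the
    sender's reputation (blue score). Purely illustrative."""
    num = local + sum(s.result * r for s, r in zip(signals, reputations))
    den = 1.0 + sum(reputations)
    return num / den

peer = MentorshipSignal(0.8, (0.7, 0.9), "m1+lora-v3", [0.01, -0.02])
print(weighted_incorporate(0.6, [peer], [1.0]))  # ~0.7, midway between 0.6 and 0.8
```

A low-reputation sender would shift the local estimate only slightly, which is the intended effect of weighting signals by blue score.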
Other nodes incorporate these signals into their local learning, weighted by the sender's reputation. This creates a recursive cooperative learning dynamic where the network's collective intelligence improves with every finality round.
Handling Contradictions
When two nodes produce contradictory results, Paraconsensus does not force a majority vote. Instead, it:
- Records both results with their confidence bounds.
- Assigns each result a truth value on a four-valued lattice: True, False, Both, Neither.
- Uses the "Both" value to flag genuine uncertainty that requires more data.
- Feeds the contradiction back into the next learning cycle as a training signal.
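The four steps above can be sketched for a pair of numeric results. Everything here is an assumption for illustration: the tolerance threshold, the string labels, and the rule that a lone report is provisionally accepted; assigning "False" (a later-refuted result) is beyond this sketch.

```python
def classify(result_a, result_b, tolerance=0.05):
    """Assign a four-valued truth label to a pair of node results.
    Illustrative only: agreement within a noise tolerance is "True",
    a clear conflict is "Both" (genuine uncertainty, needs more data),
    and missing results are "Neither"."""
    if result_a is None and result_b is None:
        return "Neither"
    if result_a is None or result_b is None:
        return "True"  # a single report is provisionally accepted
    return "True" if abs(result_a - result_b) <= tolerance else "Both"

print(classify(0.91, 0.93))  # True    -- within tolerance, no contradiction
print(classify(0.91, 0.40))  # Both    -- genuine conflict, flagged for learning
print(classify(None, None))  # Neither -- no data yet
```

A "Both" label is exactly the signal that gets fed back into the next learning cycle as a training target.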
This approach means the network gets smarter precisely where it is most uncertain -- contradictions are not bugs but learning opportunities. We believe this is fundamentally different from how other chains handle disagreement, and it is one of the design choices we are most proud of.
Relationship to GhostDAG
Paraconsensus operates as a layer above GhostDAG's block ordering. GhostDAG provides the base-layer DAG structure, block classification (blue/red), and transaction ordering. Paraconsensus adds the cooperative learning and meta-model refinement that make Citrate an AI-native chain rather than a general-purpose blockchain with AI bolted on.
Further Reading
- Finality Checkpoints -- the mechanism that triggers each learning cycle
- LoRA Adapters -- how lightweight model updates flow through the network
- Blue Score -- the reputation metric that weights mentorship signals