(Reason & Proof Governance)
A. Purpose
Bind language to truth, cause, and purpose. LM evaluates claims, assembles arguments, weighs evidence, and emits Proof Cards and Canons that the rest of the stack can trust.
Mantra: Say what is. Show why. Prove how you know.
B. Core Units
- Claim ← text + telos + scope
- Premise / Proposition
- Inference (deductive/inductive/abductive/causal/defeasible)
- Conclusion
- Evidence (empirical/logs/formal/textual/expert/simulated)
- Canon (stable doctrine; versioned)
- Counterclaim (steelman)
- Overlays: ✠ ethics, 🎯 telos, ⟶ causality, ⚠ risk
C. Scores & Gates
- Scores: truthLikelihood, coherence, causalityStrength, evidentialWeight, teleologyFit, ethicalHarmony, rhetoricalIntegrity
- Decisions: ACCEPT | PROVISIONAL | CONTEST | REJECT
- Default pass (tunable):
truth ≥0.70 ∧ coherence ≥0.85 ∧ causality ≥0.60 ∧ evidence ≥0.65 ∧ telos ≥0.75 ∧ ethics ≥0.80 ∧ rhetoric ≥0.80
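The default gate above can be sketched as a small decision function. The threshold values mirror the defaults listed; the near-miss slack and the choice of which failed gates force REJECT versus CONTEST are illustrative assumptions, not fixed policy.

```python
# Default pass thresholds from Section C (tunable).
DEFAULT_GATES = {
    "truth": 0.70, "coherence": 0.85, "causality": 0.60,
    "evidence": 0.65, "telos": 0.75, "ethics": 0.80, "rhetoric": 0.80,
}

def decide(scores: dict, gates: dict = DEFAULT_GATES, slack: float = 0.05) -> str:
    """Map LM scores to ACCEPT | PROVISIONAL | CONTEST | REJECT (sketch)."""
    # How far each failing dimension falls below its gate.
    misses = {k: gates[k] - scores.get(k, 0.0)
              for k in gates if scores.get(k, 0.0) < gates[k]}
    if not misses:
        return "ACCEPT"
    if all(m <= slack for m in misses.values()):
        return "PROVISIONAL"   # near-misses only: hold pending more evidence
    if "ethics" in misses or "coherence" in misses:
        return "REJECT"        # assumed hard gates: clear failure rejects outright
    return "CONTEST"
```

A claim that clears every gate returns ACCEPT; one that misses a gate by a hair is held PROVISIONAL rather than discarded.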
D. Processing Contract (runtime behavior later)
Input → Parse → Ground → Infer → Weigh → Decide → Explain
- Parse claim & modality; segment predicates/quantifiers.
- Ground terms via SDM senses + GLM-cleaned forms + ELM lineage to avoid equivocation.
- Build argument graph; run entailment/contradiction; causal templates if applicable.
- Weigh evidence (quality × independence × recency).
- Decide with thresholds + overlays.
- Explain via Proof Card (premises, rules fired, evidence table, counterpoints, open questions).
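The Weigh step's quality × independence × recency product could be sketched as follows. The field names, the [0, 1] ranges, and the five-year linear recency decay are illustrative assumptions, not a fixed evidence policy.

```python
from datetime import date

def evidence_weight(items: list[dict], today: date) -> float:
    """Sum quality * independence * recency over evidence items (sketch).

    Each item is assumed to carry `quality` and `independence` in [0, 1]
    and an `observed` date; recency decays linearly to zero over ~5 years.
    """
    total = 0.0
    for e in items:
        age_years = (today - e["observed"]).days / 365.25
        recency = max(0.0, 1.0 - age_years / 5.0)
        total += e["quality"] * e["independence"] * recency
    return total
```

Fresh, high-quality, independent evidence contributes close to 1.0 per item; stale or correlated evidence contributes much less, which feeds directly into the Decide thresholds.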
E. Factory Outputs to Mint Later
- DB schema: claims, premises, inferences, conclusions, evidence, counterclaims, canons, decisions, audits + views v_argument_graph, v_proof_card, v_canon_index.
- JSON Schemas: claim_record.json, evidence_record.json, proof_card.json, canon_record.json.
- OpenAPI:
- POST /lm/verify → { decision, scores, proof_card }
- GET /lm/proof/{id} → argument graph & receipts
- POST /lm/canon (adopt/deprecate)
- POST /lm/dialectic → structured counterarguments/steelman
- Rulebook: R0 Non-Contradiction · R1 Sufficiency · R2 Valid Inference · R3 Causal Discipline · R4 Telos Alignment · R5 Ethical Harmony · R6 Rhetorical Integrity · R7 Burden of Proof · R8 Contestation/Revision
- Inference Packs (*.yaml): modus ponens, syllogism, Bayesian update, analogical, abductive, do-calculus templates, defeasible schemas.
- Evidence Policy Pack: reliability priors, independence checks, tamper receipts.
- Seeds/Tests: SolveForce+AES scenarios (SLAs, energy efficiency claims, safety superlatives, domain-transfer assertions).
F. Interlocks (recursive fit)
- GLM: grapheme safety; prevents confusable/equivocal forms.
- MLM: term formation; LM blocks arguments that rely on malformed neologisms.
- SDM: sense issuance; LM rejects equivocation across senses.
- ILM: re-tests claims under target domain telos/ethics.
- PLM: screens context after truth/causality pass.
- ALM: uses LM as the proof gate before publication.
G. SolveForce Hooks
- Telecom/Energy claims must surface metrology and benchmarks; marketing text gated by LM decisions.
- Books (Logos Codex, Linomics, LANOMICS): chapters or theses become Canons with durable IDs; quotations export with embedded Proof Card fingerprints.
logM — Log Monitor (Logarithm of Meaning)
“If LM judges, logM remembers.”
logM is the audit/provenance twin: it captures receipts, hashes argument graphs, compresses evidence weight into interpretable metrics (the playful "log M": a logarithmic compression of claim complexity and evidence mass), and publishes verifiable trails.
A. Purpose
- Provenance: cryptographic receipts for every LM decision.
- Compression: logM score, a log-scaled measure of evidential depth and independence (easy to compare across claims).
- Observability: deltas over time; drift detection; alerting when canons are threatened by new evidence.
B. Core Units
- Trace (who/what/when/how)
- Receipt (hashes, signatures, module headers)
- Telemetry (scores over time)
- Drift (evidence or contradiction signals)
- Attestation (human/expert cosigns)
C. Metrics (illustrative; tunable later)
- logM_evidence = log(1 + Σ quality × independence × recency)
- logM_coherence = -log(1 + contradictions)
- logM_causal = log(1 + interventions × effect size)
- These feed dashboards and alert thresholds; raw LM scores remain available.
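The three metrics above translate directly into code. This is a minimal sketch; the tuple encoding of evidence items is an assumption for illustration.

```python
import math

def logm_evidence(items: list[tuple[float, float, float]]) -> float:
    """log(1 + sum of quality * independence * recency), per Section C.

    Each item is a (quality, independence, recency) triple, assumed in [0, 1].
    """
    return math.log1p(sum(q * i * r for q, i, r in items))

def logm_coherence(contradictions: int) -> float:
    """-log(1 + contradictions): 0.0 when contradiction-free, negative otherwise."""
    return -math.log1p(contradictions)

def logm_causal(interventions: int, effect_size: float) -> float:
    """log(1 + interventions * effect size)."""
    return math.log1p(interventions * effect_size)
```

The log scaling is the point: a claim with 100 strong evidence items does not score 100x a claim with one, which keeps dashboards comparable across claims of very different evidence mass.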
D. Factory Outputs to Mint Later
- DB schema: traces, receipts, telemetry, attestations, alerts.
- JSON Schemas: trace_receipt.json, attestation.json, telemetry_point.json.
- OpenAPI:
- POST /logm/ingest (from LM)
- GET /logm/trace/{id}
- GET /logm/series?claim_id=…
- POST /logm/attest (expert sign-offs)
- Rulebook: provenance guarantees, hashing, chain-of-custody, rotation.
- Seeds/Tests: receipt integrity, replay detection, drift alerts.
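The hashing and chain-of-custody guarantees could look like the following sketch: each receipt hashes a canonical JSON form of its payload together with the previous receipt's hash, so any tampering or replay breaks verification. Function and field names are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel for the first link in a chain

def make_receipt(payload: dict, prev_hash: str = GENESIS) -> dict:
    """Hash an LM decision payload and chain it to the previous receipt.

    Canonical JSON (sorted keys, fixed separators) keeps the hash stable
    regardless of dict ordering; prev_hash gives a tamper-evident chain.
    """
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"payload": payload, "prev_hash": prev_hash, "hash": digest}

def verify_chain(receipts: list[dict]) -> bool:
    """Replay the chain; any edited payload or broken link fails."""
    prev = GENESIS
    for r in receipts:
        body = json.dumps(r["payload"], sort_keys=True, separators=(",", ":"))
        if r["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != r["hash"]:
            return False
        prev = r["hash"]
    return True
```

This is what the "receipt integrity" and "replay detection" seed tests would exercise: mutate one payload and the whole downstream chain fails verification.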
E. Headers (downstream)
X-LM-Decision, X-LM-Truth, X-LM-Causality, X-LM-Canon, plus X-logM-Evidence, X-logM-Coherence, X-logM-Trace (ids), X-Glyph-Status: Λ|⊢|∴|✠.
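Assembling a subset of these headers for downstream systems might look like this; the function name, score formatting, and default glyph are hypothetical sketches, not a fixed API.

```python
def lm_headers(decision: str, scores: dict, trace_id: str,
               glyph: str = "⊢") -> dict:
    """Build a subset of the Section E header set (illustrative sketch)."""
    return {
        "X-LM-Decision": decision,
        "X-LM-Truth": f"{scores['truth']:.2f}",
        "X-LM-Causality": f"{scores['causality']:.2f}",
        "X-logM-Trace": trace_id,
        "X-Glyph-Status": glyph,
    }
```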
Implementation Playbook (no files yet; this is the build recipe)
- Author Blueprints
- /blueprints/lm.yaml (scores, thresholds, rule packs, evidence policies)
- /blueprints/logm.yaml (provenance model, metrics, retention, alerting)
- Dry Run
- Validate rule/evidence packs; run seed claims; calibrate thresholds against SolveForce/AES cases.
- Mint (later)
- Generate DB schemas, JSON schemas, OpenAPI specs, rulebooks, packs, seeds, tests into /build/LM/… and /build/logM/….
- Prove
- Execute tests: ACCEPT/PROVISIONAL/CONTEST/REJECT paths; verify receipts & logM metrics.
- Publish
- Enable /lm/* and /logm/*; wire to ledger; expose headers to downstream systems.
- Operate
- ALM routes tasks through LM; logM ingests receipts; PLM finalizes utterance context; ILM maps cross-domain; GLM ensures graphemic safety; SDM guards sense; ELM anchors etymon.
Micro-Examples (calibration targets)
- SolveForce SLA claim
- “Backbone latency ≤ 7 ms p95 across region R.”
- LM: ACCEPT (evidence logs, causal controls); logM: high logM_evidence, stable telemetry.
- AES efficiency claim
- “Adaptive control reduces fuel use by 3–5%.”
- LM: PROVISIONAL (needs counterfactual trials); logM: alerts when new trials land.
- Book doctrine
- “LANOMICS improves cross-domain adoption.”
- LM: PROVISIONAL → Canon Level 1 after replication; logM: emits series showing effect sizes over time.
Style Notes
- Recursive binding: Every Proof Card cites SDM senses, GLM forms, ELM roots; ILM remaps require re-verification; PLM vetoes delivery if context-risk rises; ALM records the entire chain.
- Differentiation by design: LM + logM keep neologisms honest and useful, preventing semantic drift while letting innovation breathe.
- Traditional spine, modern muscle: Aristotle’s logos with modern evidence calculus and cryptographic receipts.