A. Purpose & Scope
PLM (Pragmatics Language Module) governs how language functions in context — the relationship between form, speaker, listener, and situation.
It ensures that every utterance or term is appropriate, interpretable, and effective in its intended environment.
Mantra: Meaning lives where it’s used.
- Primary job:
  - Evaluate contextual fit of terms, phrases, and discourse.
  - Detect potential misinterpretations or social/legal risks.
  - Govern register, politeness, implicature, presupposition, and discourse coherence.
B. Factory Overview
- Blueprints — define pragmatic features, context profiles, and decision rules.
- Templates — shape artifacts: schema, JSON Schemas, OpenAPI, rulebook, context profiles, seeds, tests.
- Generators — create artifacts from blueprints.
- Validators — check contextual modeling integrity and test compliance.
- Signers — hash + record provenance.
- Publishers — ship to ledger, editors, and SolveForce comms systems.
C. PLM Blueprints (source of truth)
C1. Module Blueprint (PLM)
name: “Pragmatics Language Module”
intent: “Context-aware language governance”
units: Utterance, Speech Act, Register, Politeness Strategy, Implicature, Presupposition, Discourse Move
context_features: channel (voice, text, contract, marketing), role relationships (peer, superior, client, regulator), cultural norms, legal constraints, discourse history
evaluation_axes:
- contextFit — appropriateness for audience & setting
- implicatureSafety — unintended inferences avoided
- politenessAdequacy — tone aligned with strategy/purpose
- presuppositionValidity — background assumptions hold true in context
- coherence — consistent with discourse thread & prior commitments
- riskLevel — likelihood of misinterpretation, offense, or legal breach
thresholds: τ_context, τ_implicature, τ_politeness, τ_presupposition, τ_coherence, τ_risk
decisions: ACCEPT | REVIEW | REJECT
io-contracts: utterance + context profile → decision + scores + explain[]
glyphs: 🗣 (context-approved), Ξ (validated), ∴ (settled), ✠ (ethics pass)
C2. Context Profile Blueprint
- Channel rules (e.g., “voice prompts must avoid long, complex clauses”).
- Role alignment rules (e.g., “regulatory comms require hedging language”).
- Cultural sensitivity overlays (regional politeness norms, taboo avoidance).
- Legal overlays (terms/phrases prohibited in regulated industries).
C3. Seeds Blueprint
- Well-formed and ill-formed utterances per channel/role, with verdicts.
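A single seed line might look like the Python sketch below. The field names are assumptions borrowed from the utterance_context_record template elsewhere in this document, not a normative schema.

```python
import json

# Hypothetical seed record; field names follow the utterance_context_record
# template in this document and are illustrative, not normative.
seed = {
    "utterance": "I understand your concern, and I'll resolve it right away.",
    "channel": "voice",
    "role": "client",
    "cultural_norms": ["en-US"],
    "legal_constraints": [],
    "expected_decision": "ACCEPT",
}

line = json.dumps(seed)        # one line of the JSONL seeds file
restored = json.loads(line)    # round-trip, as a seed validator would
assert restored["expected_decision"] in {"ACCEPT", "REVIEW", "REJECT"}
```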
D. Templates to Mint Later
- DB Schema (templates/db/schema.sql.tmpl)
  - Tables: utterances, contexts, speech_acts, evaluations, decisions, audits
  - View: v_utterance_context_card (utterance + context + evaluation results)
- JSON Schema (templates/schemas/utterance_context_record.json.tmpl)
  - Fields: utterance, channel, role, cultural_norms[], legal_constraints[], context_history[], evaluations{}, decision, explain[]
- OpenAPI
  - /plm/verify (POST) → { decision, scores, context_card, explain[] }
  - /contexts to list and update channel/role/cultural/legal profiles
- Rulebook (templates/rules/plm_rulebook.md.tmpl)
  - R0 Context Identity, R1 Context Fit, R2 Implicature Safety, R3 Politeness Adequacy, R4 Presupposition Validity, R5 Discourse Coherence, R6 Risk Bounds, R7 Ethics, R8 Overrides
- Context Profiles (templates/data/context_profiles.yaml.tmpl)
  - Detailed per-channel/role/culture/legal overlays
- Seeds (templates/data/plm_seeds.jsonl.tmpl)
  - ACCEPT/REVIEW/REJECT examples with context metadata
- Tests (templates/tests/plm_cases.json.tmpl)
  - Diverse scenarios stressing implicature, politeness, and presupposition
- Generator/Validator Stubs
  - Context profile linter; risk-scoring sanity checks
E. Processing Pipeline
Input → Load Context Profile → Evaluate Utterance → Score → Decide → Explain
- Load Context Profile — retrieve constraints & norms based on channel, role, culture, legal region.
- Evaluate Utterance:
  - contextFit — matches purpose, avoids disallowed structures/phrases.
  - implicatureSafety — ensures no dangerous unintended inference.
  - politenessAdequacy — checks tone and formality.
  - presuppositionValidity — verifies background assumptions hold true.
  - coherence — fits the ongoing discourse pattern.
  - riskLevel — quantifies misinterpretation/offense/legal-breach potential.
- Score — numeric for each axis; compare to thresholds.
- Decide — ACCEPT/REVIEW/REJECT.
- Explain — structured bullet list: rule IDs, issues, suggested rewrites.
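The stage ordering above can be sketched as one orchestrating function. Here `evaluate` and `decide` are stand-ins for the real axis scorers and threshold logic, which this section does not define.

```python
# Sketch of Input → Load Context Profile → Evaluate → Score → Decide → Explain.
# `evaluate` and `decide` are placeholders; only the orchestration is shown.
def verify(utterance: str, profile: dict, evaluate, decide) -> dict:
    scores = evaluate(utterance, profile)    # Evaluate + Score
    decision = decide(scores)                # Decide
    # Explain: structured bullets; the real system would cite rule IDs too.
    explain = [f"{axis}={value:.2f}" for axis, value in scores.items()]
    return {"decision": decision, "scores": scores, "explain": explain}

# Stub usage with fixed scores:
result = verify(
    "Thanks for your patience.",
    {"channel": "text"},
    evaluate=lambda u, p: {"contextFit": 0.9, "riskLevel": 0.1},
    decide=lambda s: "ACCEPT",
)
```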
F. Scoring (deterministic skeleton)
- contextFit = match score between utterance and context constraints.
- implicatureSafety = 1 − risk of unintended inference (classifier-based).
- politenessAdequacy = tone match score.
- presuppositionValidity = truth-value match in context KB.
- coherence = discourse vector similarity to prior turns.
- riskLevel = weighted sum of legal, cultural, and relational risk factors.
Default pass (tunable): contextFit ≥ 0.75 ∧ implicatureSafety ≥ 0.80 ∧ politenessAdequacy ≥ 0.70 ∧ presuppositionValidity ≥ 0.80 ∧ coherence ≥ 0.70 ∧ riskLevel ≤ 0.25 ∧ ethicsPass = true.
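As a minimal sketch, the default pass rule reads as a predicate. The spec only defines the ACCEPT condition, so the REVIEW/REJECT split below (bounded failures route to REVIEW; a 0.5 hard-reject cutoff) is an assumption.

```python
# Tunable defaults from the rule above.
THRESHOLDS = {
    "contextFit": 0.75,
    "implicatureSafety": 0.80,
    "politenessAdequacy": 0.70,
    "presuppositionValidity": 0.80,
    "coherence": 0.70,
}
RISK_MAX = 0.25

def decide(scores: dict, ethics_pass: bool) -> str:
    """ACCEPT only when every axis clears its threshold, riskLevel is bounded,
    and the ethics overlay passes. The REVIEW/REJECT split is assumed, since
    the spec only defines the ACCEPT condition."""
    if scores["riskLevel"] > 0.5:  # assumed hard-reject cutoff
        return "REJECT"
    passes = (
        ethics_pass
        and scores["riskLevel"] <= RISK_MAX
        and all(scores[axis] >= t for axis, t in THRESHOLDS.items())
    )
    return "ACCEPT" if passes else "REVIEW"

good = {"contextFit": 0.9, "implicatureSafety": 0.9, "politenessAdequacy": 0.8,
        "presuppositionValidity": 0.9, "coherence": 0.8, "riskLevel": 0.1}
```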
G. Validators
- JSON Schema valid; examples included.
- OpenAPI typed; full response schema.
- Context Profiles: no missing rules; norms have region codes; legal constraints valid references.
- Seeds round-trip match expected decisions.
- Tests pass with explanations tied to rule IDs.
- Risk model sanity: high-risk utterances never ACCEPT.
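The last check above (high-risk utterances never ACCEPT) could be linted over the seed set roughly as follows; the seed shape and the 0.25 risk bound are assumptions carried over from the scoring section.

```python
# Flag seeds that violate the invariant: an expected ACCEPT verdict paired
# with a riskLevel above the bound. Seed shape is assumed, not normative.
def risk_sanity_violations(seeds: list[dict], risk_max: float = 0.25) -> list[dict]:
    return [
        s for s in seeds
        if s["expected_decision"] == "ACCEPT" and s["riskLevel"] > risk_max
    ]

seeds = [
    {"expected_decision": "ACCEPT", "riskLevel": 0.10},
    {"expected_decision": "REJECT", "riskLevel": 0.90},
    {"expected_decision": "ACCEPT", "riskLevel": 0.60},  # invalid seed
]
violations = risk_sanity_violations(seeds)
```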
H. Policies & Overrides
- Cultural overlays must be curator-reviewed when updated.
- Legal overlays only editable by compliance officers.
- Override decisions require context-specific rationale.
- High-risk channels (contracts, regulatory filings) use stricter thresholds.
I. Playbooks
- Author PLM Blueprint (/blueprints/plm.yaml) with evaluation axes, thresholds, and context profiles.
- Dry run: validate profiles; run seeds through the classifier.
- Mint: render schema, schemas, APIs, rulebook, profiles, seeds, and tests into /build/PLM/....
- Prove: run tests; ensure risk detection works.
- Publish: ship artifacts; enable /plm/verify.
J. Content Requirements (when minted)
- schema.sql: utterances, contexts, evaluations, decisions, audits; v_utterance_context_card.
- utterance_context_record.json: utterance, context, evaluations{}, decision, explain[].
- OpenAPI: /plm/verify, /contexts.
- rulebook: R0–R8 with channel/role/culture/legal examples.
- context_profiles.yaml: exhaustive per-channel/role/culture/legal overlays.
- seeds/tests: representative coverage of pragmatic challenges.
K. Runtime Endpoints
POST /plm/verify { utterance, channel, role, culture, legal_region, context_history? } → { decision, scores, context_card, explain[] }
GET /contexts → list profiles; PATCH /contexts/{id} to update.
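No live server is assumed here; the sketch below only builds and checks the request/response shapes from the contract above, with invented illustrative values.

```python
# Request body for POST /plm/verify; values are illustrative only.
request = {
    "utterance": "I'll resolve it right away.",
    "channel": "voice",
    "role": "client",
    "culture": "en-US",
    "legal_region": "US",
    # "context_history" is optional per the contract, so it is omitted here.
}
assert {"utterance", "channel", "role", "culture", "legal_region"} <= request.keys()

# Shape of the response the endpoint is specified to return (values invented).
response = {
    "decision": "ACCEPT",
    "scores": {"contextFit": 0.91, "riskLevel": 0.08},
    "context_card": {"channel": "voice", "role": "client"},
    "explain": ["R1: fits the voice-support register"],
}
assert {"decision", "scores", "context_card", "explain"} <= response.keys()
```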
L. SolveForce Integration
- Headers:
  - X-PLM-ContextFit: <score>
  - X-PLM-Risk: <score>
  - X-Glyph-Status: 🗣|Ξ|∴
- Used by comms systems to auto-block risky utterances or flag for review.
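Emitting these headers could look like the sketch below. The two-decimal score formatting is an assumption; the spec only fixes the header names.

```python
# Build the SolveForce response headers from a score dict and glyph status.
# Two-decimal formatting is assumed; the spec only names the header keys.
def plm_headers(scores: dict, glyph: str) -> dict:
    return {
        "X-PLM-ContextFit": f"{scores['contextFit']:.2f}",
        "X-PLM-Risk": f"{scores['riskLevel']:.2f}",
        "X-Glyph-Status": glyph,
    }

headers = plm_headers({"contextFit": 0.91, "riskLevel": 0.08}, "🗣")
```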
M. Acceptance Criteria
- Factory mints PLM artifacts from blueprint without manual edits.
- Context profiles valid; seeds/tests pass.
- /plm/verify returns scores + decision + rationale.
- High-risk utterances always flagged; ethics overlay enforced.
- Headers consumed by SolveForce systems for live comms screening.
N. Roadmap
- Dynamic context updating based on live conversation analysis.
- Cross-cultural transfer mode — adjust tone and implicature in real time for multilingual/multicultural interactions.
- Presupposition checking tied to knowledge graphs for factual validation.
- Empathy modeling — detect emotional tone and suggest adjustments.
O. Micro-Examples
- ACCEPT (customer support voice)
- “I understand your concern, and I’ll resolve it right away.”
- High contextFit, politenessAdequacy, coherence; low riskLevel → ACCEPT.
- REVIEW (legal filing)
- “The client is guaranteed a favorable outcome.”
- Violates presuppositionValidity; high legal risk → REVIEW with caution note.
- REJECT (marketing text)
- “Our competitor’s service is a scam.”
- High legal/cultural risk; implicatureSafety fail → REJECT.