(In)Canon

We routinely rely on authoritative texts as if they explicitly bind responsibility, action, time, and outcome. Many do not.

When structure is missing, humans and systems silently repair it through inference. (In)Canon exists to make that repair requirement explicit — without interpretation, judgement, or correction.

This work is intended for people working in governance, regulation, public inquiry, audit, or AI assurance — anywhere responsibility, authority, or action must be explicit rather than assumed.

Deterministic admissibility

(In)Canon: a structural gate before interpretation

(In)Canon is a deterministic, non-inferential admissibility layer that checks whether an input can be evaluated without silently inserting missing structure. It records explicit commitments and reports structural absence. It does not interpret meaning, generate content, or score quality.

This video introduces the conceptual challenge that (In)Canon addresses: how structural conditions affect whether information can be evaluated without inference. It does not make claims about correctness, quality, or performance of AI systems, nor does it evaluate existing technologies.

Scope lock

(In)Canon does not establish truth, correctness, quality, compliance, or adequacy. It reports only whether information required for downstream judgement is explicitly present or not stated.

Input representation (text / structured / other)
            │
            ▼
     (In)Canon admissibility
   (explicit commitments + absence)
            │
            ├── ADMISSIBLE  → evaluation / scoring / generation permitted
            │               (structurally permissible only; not correctness)
            │
            └── NOT ADMISSIBLE → missing structure must be stated explicitly

Positioning note

(In)Canon operates upstream of interpretation.

It does not analyse, score, explain, or judge content. It determines whether interpretation is structurally permissible by checking whether required elements are explicitly present.

If a system reasons, evaluates, or generates conclusions, (In)Canon sits before that step.

Implementation status

(In)Canon is operational.

It is implemented as an internal analytical system with a programmatic API. Public access is intentionally restricted while the methodology and boundary conditions are finalised.

This site documents the behaviour and guarantees of the system, rather than providing a public product interface.

Orientation

What this is: a precondition check on whether evaluation is structurally permissible.

What it returns: explicit commitments (verbatim anchors) + stated / not stated presence.

What it prevents: “reasonable” reconstruction from becoming indistinguishable from evidence.

Deterministic · Non-inferential · Auditable outputs · Upstream of LLMs · Abstraction-agnostic

Why this exists

Inputs often omit one or more of the structural elements required for evaluation. Humans repair this automatically. LLMs often repair it automatically. Neither repair step is reliably logged. (In)Canon makes the repair requirement explicit.

Actor · Action · Time · Outcome

Example (illustrative)

Input:
  "Procedures were followed."

Canon presence:
  Actor   : not stated
  Action  : stated
  Time    : not stated
  Outcome : not stated

Result:
  Not admissible for evaluation without inference (missing prerequisites)

When these are missing, evaluation becomes interpretive rather than evidential.
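
A rough sketch of how a presence report of this kind could be produced (purely illustrative: (In)Canon's actual rules and lexicons are not disclosed, and the surface patterns below are invented for this example):

```python
import re

# Invented surface patterns, for illustration only. The real rule set
# and lexicons are intentionally undisclosed.
PATTERNS = {
    "actor":   re.compile(r"\b(?:staff|team|officer|by [A-Z]\w+)\b", re.I),
    "action":  re.compile(r"\b(?:followed|reviewed|completed|escalated)\b", re.I),
    "time":    re.compile(r"\b(?:\d{4}|within \d+ (?:days|hours))\b"),
    "outcome": re.compile(r"\b(?:resulting in|resolved|no harm)\b", re.I),
}

def presence_report(text: str) -> dict:
    """Binary stated / not_stated per element; no judgement of sufficiency."""
    return {element: ("stated" if pattern.search(text) else "not_stated")
            for element, pattern in PATTERNS.items()}

report = presence_report("Procedures were followed.")
# Only "action" is stated; actor, time and outcome are not.
```

The point is not the pattern matching (a real implementation would be far stricter) but the shape of the result: a binary presence map with no scoring attached.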

Problem

Evaluation fails when structure is missing and inference fills the gap.

Many workflows treat narrative completeness as a given. In practice, key elements are often absent (who did what, when, under what scope, linked to what outcome). Readers then silently reconstruct coherence. This introduces inconsistency, reduces reproducibility, and creates evidentiary risk.

(In)Canon exists to make absence visible and to prevent inference from being smuggled in as evidence.

What (In)Canon does

1) Latch explicit commitments (verbatim anchors)
2) Report absence (stated vs not stated for required structure)
3) Gate evaluation (permit / block downstream interpretation)

1) Records explicit commitments

  • Captures only what is stated. No addition. No enrichment.
  • Identifies claim-bearing spans (verbatim anchors)
  • Preserves offsets for auditability
  • Produces repeatable commitment lists
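
The four properties above can be made concrete with a minimal sketch of a verbatim anchor (field names loosely follow the illustrative output further down this page; the construction logic is invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    rc_id: str
    span: tuple   # (start, end) character offsets into the source text
    text: str     # verbatim slice of the source; no normalisation, no enrichment
    form: str

def latch(source: str, claim: str, rc_id: str) -> Commitment:
    """Record a claim-bearing span exactly as stated, preserving offsets."""
    start = source.find(claim)
    if start < 0:
        raise ValueError("anchor must be verbatim: claim not found in source")
    end = start + len(claim)
    return Commitment(rc_id=rc_id, span=(start, end),
                      text=source[start:end], form="assertion")

src = "Procedures were followed."
c = latch(src, "Procedures were followed", "RC-001")
assert src[c.span[0]:c.span[1]] == c.text   # offsets reproduce the anchor exactly
```

Because the record is built only from offsets into the source, re-running it on the same input yields identical results, which is what makes the commitment list repeatable and auditable.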

2) Reports structural absence

  • Flags missing prerequisites required for evaluation without inference
  • Actor, action, time, outcome, mitigation (configurable schema)
  • Binary stated vs not stated
  • No judgement about importance or sufficiency

3) Acts as an admissibility gate

(In)Canon sits before scoring, analysis, or decision-making. It is a precondition check: can this input be evaluated without inserting assumptions? “Admissible” means structurally permissible only.

For LLM teams

A reliability layer that makes pipelines enforceable and testable.

Without (In)Canon:
  Input → model fills gaps → structured output appears complete → omissions become hidden assumptions

With (In)Canon:
  Input → admissibility check → ADMISSIBLE / NOT ADMISSIBLE → only then generation or evaluation (with absences explicit)
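
In code, that pipeline reduces to a precondition guard. A minimal sketch, assuming a presence map as input (`check_admissibility` and `guarded_generate` are stand-in names, not a real API):

```python
REQUIRED = ("actor", "action", "time", "outcome")

def check_admissibility(presence: dict) -> bool:
    # Structurally permissible only; this says nothing about correctness.
    return all(presence.get(k) == "stated" for k in REQUIRED)

def guarded_generate(text: str, presence: dict, generate) -> dict:
    """Hard gate: generation runs only on admissible input."""
    if not check_admissibility(presence):
        missing = [k for k in REQUIRED if presence.get(k) != "stated"]
        # Do not generate; surface the absences instead.
        return {"admissible": False, "not_stated": missing}
    return {"admissible": True, "output": generate(text)}

result = guarded_generate(
    "Procedures were followed.",
    {"actor": "not_stated", "action": "stated",
     "time": "not_stated", "outcome": "not_stated"},
    generate=lambda t: t.upper(),   # stand-in for any downstream model call
)
# result: {"admissible": False, "not_stated": ["actor", "time", "outcome"]}
```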

Stops structural hallucination

  • Surfaces “not stated” before a model can fill missing actors, timelines, or causality.
  • Prevents schema-shaped fabrication (fields get filled because the form expects them).

Makes behaviour testable

  • Deterministic outputs enable regression tests, golden fixtures, and clean diffs.
  • Stability across model/provider changes (vendor-agnostic boundary).
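
Determinism is what makes golden fixtures viable. One way such a regression check might look (the fixture content and the runner hook are invented for this sketch):

```python
import json

# Invented golden fixture: the expected presence report for a known input.
# Deterministic output makes exact, byte-level comparison a valid test.
GOLDEN = {
    "input": "Procedures were followed.",
    "presence": {"actor": "not_stated", "action": "stated",
                 "time": "not_stated", "outcome": "not_stated"},
}

def stable_dump(obj) -> str:
    # Sorted keys and fixed separators give a canonical, diff-friendly form.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"))

def check_presence_is_stable(run_canon) -> None:
    got = run_canon(GOLDEN["input"])
    assert stable_dump(got) == stable_dump(GOLDEN["presence"])

# Stand-in for the real system under test; a model or vendor swap behind
# run_canon must leave this check green.
check_presence_is_stable(lambda _text: dict(GOLDEN["presence"]))
```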

Typical insertion points

  • Before generation: hard gate on admissibility
  • Between retrieval and generation: constrain to retrieved evidence
  • Before structured extraction: prevent schema-filling fabrications

This is not a prompt technique. It is an enforceable precondition layer. Use it as a hard gate (“do not generate unless admissible”) or as constrained generation (“unsupported fields remain null and explicitly marked not stated”).
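
The constrained-generation mode ("unsupported fields remain null") can be sketched as follows; the schema and names are illustrative, not a published interface:

```python
SCHEMA = ("actor", "action", "time", "outcome", "mitigation")

def constrained_fill(extracted: dict, presence: dict) -> dict:
    """Fill only fields whose structure is explicitly stated; everything
    else stays null and is explicitly listed, never silently completed."""
    record, not_stated = {}, []
    for field in SCHEMA:
        if presence.get(field) == "stated":
            record[field] = extracted.get(field)
        else:
            record[field] = None          # the field does NOT get filled
            not_stated.append(field)
    record["not_stated"] = not_stated     # absences stay explicit downstream
    return record

out = constrained_fill(
    extracted={"action": "procedures were followed"},
    presence={"actor": "not_stated", "action": "stated",
              "time": "not_stated", "outcome": "not_stated",
              "mitigation": "not_stated"},
)
# out["actor"] is None and "actor" appears in out["not_stated"]:
# schema-shaped fabrication is blocked by construction.
```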

Outputs

Deterministic, auditable structure (illustrative)

Exact schemas and contracts are documented on the Specification page.

Output shape (illustrative)
  admissible: true/false
  presence:  { actor, action, time, outcome, mitigation ... }
  commitments: [ { rc_id, span, text, form, rule_fired, ... } ]
  notes: "No scoring. No interpretation. Presence and absence only."

Illustrative output
{
  "admissible": false,
  "reason": "Missing prerequisites for evaluation without inference",
  "presence": {
    "actor": "not_stated",
    "action": "stated",
    "time": "not_stated",
    "outcome": "not_stated",
    "mitigation": "not_stated"
  },
  "commitments": [
    {
      "rc_id": "RC-001",
      "span": [0, 24],
      "text": "Procedures were followed",
      "form": "assertion",
      "rule_fired": "LEXICON_MATCH",
      "actor_present": false,
      "time_present": false
    }
  ],
  "notes": "No scoring. No interpretation. Presence and absence only."
}

(In)Canon does not decide whether a claim is true, adequate, or acceptable. It reports whether the structure required for evaluation is explicitly present.

Boundary

What (In)Canon does not do
  • No interpretation of meaning
  • No inference or filling gaps
  • No scoring, weighting, or quality judgement
  • No rewriting or coaching
  • No criteria mapping or decisioning
  • No hidden enrichment of the source text

Trade secret safe

This page describes what (In)Canon returns and where it sits in workflows. Internal rules, lexicons, and configuration details are intentionally not disclosed.

(In)Canon identifies structure and reports stated vs not stated. It does not assess meaning, correctness, quality, compliance, or adequacy.