(In)Canon
Deterministic admissibility

COVID Project

A full-corpus structural admissibility scan of published inquiry evidence using (In)Canon.

Scope lock

(In)Canon does not assess truth, correctness, quality, compliance, adequacy, or intent. It reports whether required structural elements are explicitly stated within a single document. “Admissible” and “complete” on this page mean structurally permissible only.

Overview

Full-corpus structural admissibility at scale

This page reports a methods result: what is explicitly stated inside single published inquiry documents, and how often interpretation depends on structure that is not stated at document-level. It is not policy analysis, truth assessment, or a critique of writing quality.

Full corpus (10,542) · Deterministic · Non-inferential · Document-level · Upstream of synthesis
Results snapshot
6 / 10,542

Six documents contained an explicit actor, action, time, and outcome in a single document (≈ 0.0569% of the corpus). This describes explicit structure only.

Reader safety

Absence is reported as not stated. This does not imply “did not happen”. It indicates the document does not itself carry the minimum bindings needed to begin interpretation without reconstruction.

See the controlled demonstration: Determinism Demo

For AI Engineers

Treat missing structure as a first-class state (not stated) across a whole corpus. This motivates gating and regression-testable boundaries before extraction, scoring, or generation.

For Governance / Audit

Interpretation is often documented; reconstruction often is not. This page makes the need for a document-level assembly phase visible by refusing to perform it.

Case study

Structural admissibility in public narrative

What this page is doing

This is a real-world anchor for (In)Canon. It applies Canon’s admissibility logic to publicly available UK COVID-19 Inquiry material. The purpose is to show what happens when narrative coherence is treated as not guaranteed.

What this page is not doing

It does not interpret meaning, assess truth, infer intent, score quality, or stitch context across documents. It records only what is explicitly stated within a single document and reports absence as not stated.

Method

What was checked, at what unit, with what output

To keep the page scannable, the method is presented as a short summary first, with the full table available below.

Unit and checks

  • Unit of analysis: one published document at a time.
  • Checked for: explicit actor, action, time, outcome within that document.
  • Co-presence: whether required elements appear together in the same document.

Outputs and exclusions

  • Output: deterministic presence/absence flags and corpus-level counts.
  • Absence: reported as not stated.
  • Explicit non-scope: no inference, no interpretation, no cross-document stitching.
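
The output contract above can be sketched without disclosing any matching logic. The sketch below is illustrative Python, not (In)Canon code: it assumes the presence flags were already produced by an upstream (undisclosed) matching step, and only assembles the deterministic per-document record. `admissibility_record` and the sample flag set are hypothetical names.

```python
# Build a deterministic, document-level record from presence flags
# produced upstream. Nothing here detects elements; a missing flag is
# reported as "not_stated", never filled in by inference.

REQUIRED = ("actor", "time", "action", "outcome")

def admissibility_record(doc_id, stated):
    """`stated` is the set of element names explicitly found in the document."""
    return {
        "doc_id": doc_id,
        # Admissible only if all four elements co-occur in this one document.
        "admissible": all(e in stated for e in REQUIRED),
        "presence": {e: ("stated" if e in stated else "not_stated")
                     for e in REQUIRED},
        "notes": "Absence reported; no reconstruction performed.",
    }

record = admissibility_record("INQ000XXXXXX", {"actor"})
```

Given only an explicit actor, the record reports the other three fields as not stated and the document as inadmissible; no field is ever reconstructed.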
Full method table

Component          | What was done (plain terms)
Unit of analysis   | One published document at a time, treated as a standalone unit.
Checked for        | Whether key narrative elements were explicitly stated within that document (actor, time, action, outcome), and their co-presence.
Output             | Deterministic presence/absence flags and corpus-level counts. Absence is reported as not stated.
Explicit non-scope | No inference, no interpretation, no truth assessment, and no cross-document stitching.

What this shows

The structural point, stated plainly

Interpretation often begins after an unlogged assembly phase

When key narrative elements are not explicitly stated within a document, evaluation requires reconstruction (assembly) before interpretation. Canon makes that dependency visible by refusing reconstruction and recording absence as not stated.

Conventional narrative reading vs Canon output

This is not a critique of writing quality. It is a structural demonstration. Narrative can feel complete while still leaving key commitments unstated at document-level.

Structural field | Conventional narrative reading often supplies | Canon output (document-level)
Actor (who)      | Implied institutional actors (“the department”, “officials”, “the team”). | NOT STATED unless an explicit actor is named in the document.
Action (what)    | Implied actions (“reviewed”, “approved”, “implemented”) based on narrative flow. | NOT STATED unless an attributable action is explicitly stated.
Time (when)      | Assumed sequencing (“after review”, “during the period”, “in response to”). | NOT STATED unless a time marker is explicitly present.
Outcome (result) | Assumed outcomes (“resolved”, “improved”, “mitigated”) inferred from context. | NOT STATED unless an outcome is explicitly stated in the document.

Results

Headline presence and co-presence

Presence rates below refer to explicit structural elements within individual documents only. These numbers do not imply anything about truth or events. They describe explicit structure only.

Presence (single elements)

Element                 | Docs with explicit presence | Share of corpus
Actor (who)             | 5,855 | 55%
Time (when)             | 4,919 | 46%
Action (what happened)  | 125   | 1.2%
Outcome (what resulted) | 103   | 0.9%

Co-presence (minimum bindings)

Condition                       | Documents
Actor + Action                  | 125
Actor + Action + Time           | 6
Actor + Action + Time + Outcome | 6

“Structurally complete” below refers to structure within one document, not factual completeness.
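
The co-presence figures above are pure counting over per-document flags, and are monotone by construction: each added requirement can only keep or shrink a count. A minimal sketch, using toy records with hypothetical flag values (not real corpus data):

```python
# Corpus-level co-presence counts from per-document flag records.
# This is arithmetic over flags already produced; no matching logic.

CONDITIONS = {
    "Actor + Action": ("actor", "action"),
    "Actor + Action + Time": ("actor", "action", "time"),
    "Actor + Action + Time + Outcome": ("actor", "action", "time", "outcome"),
}

def co_presence_counts(records):
    return {
        name: sum(all(r[e] == "stated" for e in elems) for r in records)
        for name, elems in CONDITIONS.items()
    }

# Three toy records: fully explicit, partially explicit, mostly absent.
toy = [
    {"actor": "stated", "action": "stated", "time": "stated", "outcome": "stated"},
    {"actor": "stated", "action": "stated", "time": "not_stated", "outcome": "not_stated"},
    {"actor": "stated", "action": "not_stated", "time": "not_stated", "outcome": "not_stated"},
]
counts = co_presence_counts(toy)
```

The same monotonicity explains the table above: the count can drop sharply between Actor + Action and Actor + Action + Time, but can never rise.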

Illustrative output shape (trade-secret-safe)
{
  "doc_id": "INQ000XXXXXX",
  "admissible": false,
  "presence": {
    "actor": "stated",
    "time": "not_stated",
    "action": "not_stated",
    "outcome": "not_stated"
  },
  "notes": "Absence reported; no reconstruction performed."
}
The six documents

IDs of structurally complete single-document narratives

The document IDs below are listed for transparency. These were the only items in the corpus that contained an explicit actor, action, time, and outcome within a single document. This listing is not interpretive.

INQ000221436 Structurally complete
INQ000232194 Structurally complete
INQ000474447 Structurally complete
INQ000492281 Structurally complete
INQ000532753 Structurally complete
INQ000620750 Structurally complete
Important clarification

“Structurally complete” here means complete in structure, not in fact. The method does not assess truth. It assesses whether the document itself contains the minimum structural bindings needed to begin interpretation without reconstructive guesswork.

How to read it

What not to conclude

This page is easiest to misread if it is treated as commentary. It is not. It is a structural description of explicitness at document-level.

Do not conclude vs what this shows

Do not conclude | What this does show
That this proves anything about truth, blame, or intent. | At the single-document level, many texts do not explicitly contain the narrative structure readers rely on.
That this is a critique of policy decisions or outcomes. | This is a methods clarification about explicitness and the point at which reconstruction begins.
That absence means “nothing happened”. | Absence means “not explicitly stated here”; any evaluation requires assembly elsewhere.
That (In)Canon is doing analysis. | (In)Canon is upstream: it reports presence and absence so analysis can start on honest ground.

Two-stage model

Assembly vs interpretation

Canon sits upstream. It reports when Stage 1 is required by exposing structural absence without performing reconstruction.

Stage 1: Assembly (Reconstruction)  →  Stage 2: Interpretation
          (often unlogged)                 (usually documented)

Stage 1: Assembly (Reconstruction)

Fragments are stitched into minimal coherence using contextual assumptions, domain knowledge, and external logic. This step is often not explicitly named or logged.

Stage 2: Interpretation

Coding, synthesis, analysis, judgement, and reporting. This step is usually documented well, but it often begins after assembly has already occurred.

Full two-stage table

Stage | Name | What happens | What gets documented
1 | Assembly (Reconstruction) | Fragments are stitched into minimal coherence using contextual assumptions, domain knowledge, and external logic. | Often not logged or named.
2 | Interpretation | Coding, synthesis, analysis, judgement, and reporting. | Usually documented well.

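
The two-stage model implies a simple upstream gate: a pipeline may enter Stage 2 (interpretation) directly only on an admissible record; otherwise Stage 1 (assembly) must be performed, and a log of that assembly is what unlocks Stage 2. The sketch below is illustrative; `next_stage` and its return values are hypothetical names, not (In)Canon API.

```python
# Upstream gate for the two-stage model. It never performs assembly
# itself; it only reports which stage is permitted next.

def next_stage(record, assembly_log=None):
    """Decide the next permitted stage for one document record."""
    if record["admissible"]:
        return "interpretation"   # Stage 2 directly: no reconstruction needed
    if assembly_log:
        return "interpretation"   # Stage 2, with Stage 1 explicitly on record
    return "assembly"             # Stage 1 required, and it must be logged
```

Used this way, the unlogged assembly phase becomes impossible to skip silently: interpretation of an inadmissible record is only reachable through an explicit assembly log.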
Proposal

Structural Integrity Statement (illustrative)

This is an example of a small, upstream disclosure that records whether interpretation depended on reconstruction, without judging the interpretation itself.

The transparency clock problem

Many assurance workflows treat narrative coherence as a given. This project suggests transparency often starts too late: by the time interpretation is documented, key epistemic decisions have already occurred during assembly.

A practical, non-judgemental artefact

A Structural Integrity Statement (SIS) is a small upstream disclosure: it records whether reconstruction was required, and makes that reliance auditable.

Structural Integrity Statement (trade-secret-safe, illustrative)
{
  "corpus_type": "Large administrative / inquiry documents",
  "pre_interpretive_structural_assessment": "Conducted",
  "explicit_narrative_completeness": {
    "count": 6,
    "denominator": 10542,
    "percent": 0.0569
  },
  "implication": "Reconstruction required prior to interpretation",
  "audit_response": "Reconstructive steps logged separately from interpretive coding"
}
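
The SIS shape above can be derived mechanically from raw counts, so the percentage is computed rather than hand-entered. A minimal sketch, assuming a hypothetical helper (`integrity_statement` is not part of (In)Canon; field names mirror the illustrative JSON):

```python
# Assemble a Structural Integrity Statement from corpus-level counts.
# The statement records reliance on reconstruction; it judges nothing.

def integrity_statement(count, denominator):
    return {
        "corpus_type": "Large administrative / inquiry documents",
        "pre_interpretive_structural_assessment": "Conducted",
        "explicit_narrative_completeness": {
            "count": count,
            "denominator": denominator,
            # 100 * 6 / 10542 = 0.056915..., rounded to four decimals
            "percent": round(100 * count / denominator, 4),
        },
        "implication": "Reconstruction required prior to interpretation",
        "audit_response": "Reconstructive steps logged separately from interpretive coding",
    }

sis = integrity_statement(6, 10542)
```

Computing the percentage from the count and denominator keeps the disclosure internally consistent and regression-testable as the corpus grows.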
Boundary

What this is not

This page reports corpus-level structural presence only. It does not disclose internal rules, lexicons, or matching logic.

Not this

  • Policy critique
  • Automated judgement
  • Inference engine
  • Meaning analysis

This

  • Pre-interpretive structural description
  • Explicit presence and absence reporting
  • Non-inferential admissibility gate
  • Structural conditions for analysis
Constraints table (explicit non-scope)

Constraint               | Meaning
No interpretation        | No assessment of meaning, intent, correctness, or quality.
No inference             | Nothing is filled in if it is not explicitly stated.
No cross-document linking | No stitching of actors, actions, times, or outcomes across documents.
Unit of analysis         | One document at a time.
Output                   | Deterministic counts and co-presence rates.

Trade secret safe

The presentation is reader-safe: it shows outputs and consequences without disclosing internal mechanics.

(In)Canon identifies structure and reports stated vs not stated. It does not assess meaning, correctness, quality, compliance, or adequacy.