Making hidden dependencies visible in formal systems
(In)Canon began as a structural admissibility method for narrative and governance documents: it records explicit commitments, records explicit absences, and refuses to fill gaps.
Here we apply the same discipline to non-narrative artefacts — specifically OpenAPI specifications — to show that “machine-readable” does not mean “machine-complete”, and that different systems place authority in different layers.
This is not an API quality judgement. It is a structural diagnostic that shows where a contract ends and where external authority must begin.
We built a Canon-adjacent pipeline that ingests an OpenAPI file and emits a commitment index: a traceable inventory of what the spec explicitly declares and what it explicitly does not declare under a fixed inspection frame.
Concretely, the inspection frame records structural facts such as: components present/absent, paths/operations present, request bodies declared/absent, parameters declared, and whether “requiredness” is explicit or not.
The output is then rendered into a human-readable extract and summarised into a profile: counts, dominant absence tags, and hotspot scopes where non-declarations cluster.
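As an illustration of what such an inspection frame can look like (the actual pipeline is not reproduced here; the tag names and entry shape below are hypothetical), a minimal pass over a parsed OpenAPI document might be sketched as:

```python
# Minimal sketch of a fixed inspection frame over a parsed OpenAPI document.
# Tag names and the entry shape are illustrative, not (In)Canon's actual schema.
from typing import Any

HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}
BODY_METHODS = {"post", "put", "patch"}

def inspect_openapi(spec: dict[str, Any]) -> list[dict[str, Any]]:
    """Record explicit declarations and explicit absences; never infer intent."""
    facts: list[dict[str, Any]] = []
    for path, item in spec.get("paths", {}).items():
        for method, op in (item or {}).items():
            if method not in HTTP_METHODS:
                continue
            scope = f"paths.{path}.{method}"
            # Request body: declared or structurally silent, for body-capable methods.
            if method in BODY_METHODS:
                declared = "requestBody" in op
                facts.append({
                    "pointer": scope, "scope": scope,
                    "tag": "request_body:declared" if declared else "request_body:absent",
                    "status": "declared" if declared else "absent",
                })
            # Parameter requiredness: explicit or not declared.
            for i, param in enumerate(op.get("parameters", [])):
                explicit = "required" in param
                facts.append({
                    "pointer": f"{scope}.parameters[{i}]", "scope": scope,
                    "tag": "requiredness:explicit" if explicit else "requiredness:not_declared",
                    "status": "declared" if explicit else "absent",
                })
    return facts
```

The essential property is that the pass only records what is present or absent at known locations; it never guesses what an absence means.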
From the perspective of a system owner, a specification can appear “complete” because the API works in production and internal teams know how to use it.
Problems arise when the same specification is consumed by third parties, automated tooling, or autonomous agents that do not share that implicit knowledge.
In these contexts, structural silence is not neutral. It is routinely filled by assumptions — sometimes incorrectly.
- Integration bugs occur when required parameters, request bodies, or constraints are assumed but not formally declared.
- Security vulnerabilities emerge when authentication, authorization, or validation rules are enforced by convention or runtime rather than by an explicit contract.
- Agent misbehaviour occurs when automated systems infer obligations that were never stated.
Structural diagnostics do not claim these systems are broken. They identify where responsibility shifts from the specification to documentation, convention, or code — so that this shift can be made explicit, audited, or guarded.
Each run produces three artefacts:
- Commitment Index (JSON): traceable pointers + tags (machine-readable structural evidence).
- Readable Extract (TXT): a compact inventory + absence register (human-readable evidence).
- Profile Summary (JSON): counts, top absence tags, and hotspots (comparative analytics).
These are designed to be shareable without exposing internal mechanics: they are outputs, not a disclosure of the method’s implementation.
Although the analysis itself is non-interpretive, its outputs are intentionally machine-readable.
The Commitment Index is emitted as structured JSON, with stable pointers, tags, and scopes. This allows it to be consumed programmatically rather than read manually.
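The exact schema is not reproduced here, but a single index entry can be pictured roughly like this (all field names and the example path are illustrative):

```python
# Illustrative shape of one Commitment Index entry; field names are hypothetical.
entry = {
    "pointer": "paths./v1/things.post.requestBody",  # stable, traceable location
    "scope": "paths./v1/things.post",                # the scope the fact applies to
    "tag": "request_body:absent",                    # what is (or is not) stated
    "status": "absent",                              # declared | absent
}
```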
In practice, this means the outputs can be integrated into tooling such as:
- CI/CD pipelines that fail builds when critical commitments are absent
- Pre-flight checks for third-party API consumption
- Agent guardrails that refuse execution when authority is insufficiently explicit
- Governance or assurance workflows that require auditable boundaries
Importantly, the method does not decide what is acceptable. It supplies a deterministic structural signal that downstream systems can act on according to local policy.
As teams rush to deploy autonomous AI agents, a practical problem emerges: agents do not pause when a specification is incomplete. They proceed. When an interface representation is structurally silent, an agent (or any automated consumer) may fill that silence with assumptions based on prior examples, training data, or heuristics.
Structural diagnostics act as a pre-flight check: a deterministic upstream signal that tells you whether an agent can safely treat a representation as authoritative for execution, or whether additional authority must be supplied.
The Commitment Index can be used as an input to agent guardrails or middleware gates. For example, a minimal gate (the tag names and policed set below are hypothetical local policy, not part of the method) might refuse to call any operation that is structurally silent on a policed commitment:
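```python
# Hypothetical middleware gate: refuse autonomous execution when the Commitment
# Index shows structural silence for the operation the agent wants to call.
POLICED = {"request_body", "requiredness"}  # commitment types under local policy

def allow_execution(index: list[dict], operation_scope: str) -> bool:
    """Refuse when any policed commitment in scope is structurally silent."""
    return not any(
        e["scope"] == operation_scope
        and e["status"] == "absent"
        and e["tag"].split(":")[0] in POLICED
        for e in index
    )
```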
This is not a judgement about the API. It is a decision about whether autonomous execution is permitted under a defined minimum structural bar.
The key point: the method does not guess what is missing. It makes the absence explicit so your system can handle it explicitly.
Structural diagnostics become commercially relevant when organisations treat representations (specifications, contracts, regulatory texts) as inputs to operational systems. When a representation is not structurally self-contained, someone must supply the missing authority — usually through manual review, institutional knowledge, or runtime controls.
1) Integration assurance (“integration insurance”)
When an organisation integrates a third-party API, it takes on third-party execution risk: not because the vendor is “bad”, but because the interface may rely on conventions and unstated requirements that are discovered only during integration.
A Canon-adjacent API Health Audit provides a fast structural profile: hotspots where non-declarations cluster, and the dominant categories of silence. This reduces manual time spent hunting for “hidden requirements” across docs, examples, and tribal knowledge.
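Assuming the illustrative entry shape sketched earlier, a hotspot report for such an audit could be as small as the following; the file layout and field names are assumptions:

```python
import json
from collections import Counter

# Hypothetical hotspot report: count structural absences per scope so reviewers
# see where non-declarations cluster before committing to an integration.
def hotspot_report(index_path: str, top: int = 5) -> Counter:
    with open(index_path) as f:
        index = json.load(f)
    hotspots = Counter(e["scope"] for e in index if e["status"] == "absent")
    for scope, count in hotspots.most_common(top):
        print(f"{count:3d} absences in {scope}")
    return hotspots
```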
2) CI/CD enforcement for minimum structural bars
Because the outputs are emitted as stable, machine-readable JSON (pointers + tags), teams can enforce local policies in CI/CD: fail builds, raise gates, or require explicit declarations for selected commitment types before a spec is shipped, published, or used for automation.
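A minimal CI gate over the Profile Summary might look like the sketch below; the field name, thresholds, and exit convention are local policy choices, not part of the method:

```python
import json
import sys

# Hypothetical CI gate: fail the build when structural silence for selected
# commitment types exceeds a locally defined threshold.
MAX_ABSENCES = {"request_body:absent": 0, "requiredness:not_declared": 10}

def main(profile_path: str) -> None:
    with open(profile_path) as f:
        profile = json.load(f)                      # the Profile Summary JSON
    counts = profile.get("absence_tag_counts", {})  # illustrative field name
    violations = {
        tag: n for tag, limit in MAX_ABSENCES.items()
        if (n := counts.get(tag, 0)) > limit
    }
    if violations:
        print(f"Structural bar not met: {violations}")
        sys.exit(1)

if __name__ == "__main__":
    main(sys.argv[1])
```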
3) Benchmarking (“Stripe vs Kubernetes” as a measurable contrast)
Comparative profiling makes structural differences measurable across systems. In practice, this supports benchmarking: not as a moral ranking, but as an estimate of interpretation debt — the human and institutional effort required to supply unstated authority during integration and automation.
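Because each run emits the same profile shape, comparing systems reduces to a few lines (field names again illustrative):

```python
import json

# Hypothetical comparison: the same profile shape makes structural silence
# directly comparable across systems.
def silence_pct(profile_path: str) -> float:
    with open(profile_path) as f:
        p = json.load(f)
    return 100.0 * p["absent_count"] / p["total_commitments"]

for name in ("stripe", "github", "kubernetes"):
    pct = silence_pct(name + "_profile.json")
    print(f"{name}: {pct:.0f}% structural silence")
```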
| Use case | Who uses it | Why it matters |
|---|---|---|
| Agent guardrails | AI engineering teams | Prevent autonomous execution from silently importing assumptions. |
| API due diligence | Platform, procurement, risk | Quantify structural hotspots before committing to an integration. |
| CI/CD compliance | DevOps / platform | Enforce a minimum structural bar for specs used by tooling or agents. |
This is not a toy demonstration. It is an operational assurance pattern: a structural “x-ray” for formal systems that reveals whether automation is relying on explicit authority or on unstated work done elsewhere.
We ran the same inspection frame against three widely used systems: Stripe (spec-centred), GitHub (convention-centred), and Kubernetes (runtime-centred).
| System | Structural silence (Denied %) | Dominant absence signature | Primary authority layer |
|---|---|---|---|
| Stripe API | ~21% | Relatively few absences; requiredness and request bodies are frequently explicit | Contract (specification) |
| GitHub REST API | ~31% | Requiredness frequently not declared (convention and docs carry the load) | Convention + documentation |
| Kubernetes OpenAPI | ~38% | Requiredness and request bodies frequently not declared (runtime carries the load) | Runtime code + controllers |
These figures are not a score. They are a structural property under an explicit inspection frame: how often the representation declines to state a particular kind of commitment.
“Machine-generated” does not guarantee “spec-complete”. Kubernetes is highly formal at the type level, but often delegates obligations to runtime enforcement.
In practice, this kind of structural diagnostic does three things:
- Identifies where an agent would need to import assumptions to execute an interface safely, and where an upstream gate should refuse or request additional authority.
- Makes responsibility boundaries explicit before audit, risk review, or automation; this is especially valuable when downstream systems are probabilistic.
- Helps teams understand whether obligations live in the contract, in convention, or in runtime controls, and what that implies for maintenance, tooling, and integration.
Structural diagnostics do not assess correctness, security posture, quality, adequacy, or intent. They do not infer missing meaning or propose improvements.
They operate upstream: they make explicit where interpretation begins, so that downstream evaluation can be appropriately bounded and accountable.
(In)Canon reports what is stated and what is not stated — and refuses to do the unstated work on your behalf.
Because the outputs are structured and deterministic, structural diagnostics can be enforced automatically rather than left to social convention.