
Minimal Countermodel Catalog

What escapes each theorem—and what you sacrifice to escape.

This catalog lists the structural conditions under which a system can avoid the Desmocycle necessity requirements (closure, globality, self-indexing) while remaining coherent. Each escape route is legitimate but trades away something specific.


0. The Core Claim Being Escaped

The capstone theorem claims:

Bounded general competence under novelty ⇒ Closure ∧ Globality ∧ Self-Indexing

Each countermodel negates at least one premise while remaining internally consistent.


1. Escaping Closure (Evaluative Leverage)

Theorem escaped: Hot Zombie Failure / Closure-or-Collapse

Claim: Evaluation must causally steer control.

1.1 Unbounded Capacity (k ≥ n)

How it escapes: If the system can represent all potentially relevant degrees of freedom simultaneously, selection is unnecessary. No selection pressure means no need for evaluation to guide what gets selected.

What you sacrifice: Finite resources. Real systems (biological or engineered) have bounded memory, compute, and bandwidth. This escape requires infinite (or effectively infinite) capacity relative to task complexity.

Example systems:
- Lookup tables for small state spaces
- Exhaustive enumeration in toy domains
- Systems where task complexity is artificially bounded to fit capacity
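The lookup-table case can be sketched in a few lines (a toy illustration, not from the source; the parity task and bit-world are hypothetical stand-ins): when capacity covers the whole state space, precomputation replaces evaluation-driven selection entirely.

```python
# Toy illustration (hypothetical): when capacity covers the whole state
# space (k >= n), a lookup table replaces evaluation-driven selection.
from itertools import product

def build_table(n_bits):
    # Enumerate every possible state of an n-bit world and precompute
    # the "correct" action (here: parity, standing in for any task).
    return {state: sum(state) % 2 for state in product([0, 1], repeat=n_bits)}

table = build_table(3)          # 2**3 = 8 entries: full coverage
assert table[(1, 0, 1)] == 0    # every query is a constant-time lookup
assert len(table) == 8          # nothing left to select between
```

The escape's cost is visible in the exponent: the table doubles with every added degree of freedom, which is exactly the finite-resource constraint the theorem assumes.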

Verdict: Theoretically valid; practically unavailable for general intelligence.


1.2 No Novelty (Fixed Relevance)

How it escapes: If the task-relevant coordinate never changes (λ = 0), a system can hardcode the correct allocation and never need error-driven reallocation.

What you sacrifice: Generality. The system works only in the specific environment it was designed for. Any distributional shift breaks it.

Example systems:
- Fixed industrial controllers
- Single-purpose classifiers in static domains
- Reflex arcs for invariant stimuli
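A minimal sketch of fixed relevance (illustrative only; the coordinate indices and task are hypothetical): the allocation is frozen at design time, so the system is correct while λ = 0 and fails silently the moment relevance moves.

```python
# Hypothetical sketch: a controller hardcoded to attend to coordinate 0.
# Valid while relevance never changes (lambda = 0); any shift breaks it.
def hardcoded_agent(observation):
    # Allocation fixed at design time -- no error-driven reallocation.
    return observation[0]

def task(observation, relevant_index):
    # Success means reporting the value at the relevant coordinate.
    return observation[relevant_index]

obs = [7, 3, 9]
assert hardcoded_agent(obs) == task(obs, relevant_index=0)   # static world: fine
assert hardcoded_agent(obs) != task(obs, relevant_index=2)   # shift: silent failure
```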

Verdict: Valid for narrow AI; incompatible with general competence by definition.


1.3 External Oracle Provides Relevance

How it escapes: If an external system tells the agent exactly which coordinates to attend to at each step, the agent doesn’t need internal evaluation—it just follows instructions.

What you sacrifice: Autonomy. The agent is no longer generally competent on its own; competence is offloaded to the oracle. The oracle must itself solve the relevance-tracking problem (pushing the requirement up a level).

Example systems:
- Supervised attention (human-in-the-loop pointing)
- Tool-use where tool selection is externally specified
- Scripted agents following pre-planned sequences

Verdict: Valid but displaces the problem rather than solving it.


1.4 Brute-Force Parallelism

How it escapes: If the system runs k parallel copies, each attending to a different coordinate, and aggregates results, it can cover all n coordinates without selection—essentially implementing k ≥ n through parallelism rather than capacity.

What you sacrifice: Efficiency (compute/energy scales with n), and this only works if tasks decompose into independent subproblems. For tasks requiring integrated reasoning across coordinates, parallelism doesn’t help—you still need to combine results, which reintroduces a bottleneck.

Example systems:
- Ensemble methods with independent heads
- Embarrassingly parallel search
- Multi-agent swarms with no coordination requirement
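The parallel-cover escape can be sketched as follows (a toy example under the assumption that the task decomposes coordinate-wise; the squaring worker is a hypothetical stand-in for any independent subproblem):

```python
# Illustrative sketch: k parallel workers, one per coordinate, with a
# trivial aggregation step. This works only because the task decomposes.
from concurrent.futures import ThreadPoolExecutor

def worker(coordinate_value):
    # Each copy attends to exactly one coordinate; no shared selection.
    return coordinate_value ** 2

def parallel_cover(state):
    with ThreadPoolExecutor(max_workers=len(state)) as pool:
        partials = list(pool.map(worker, state))
    # Aggregation is where the bottleneck reappears for coupled tasks:
    # summing independent results is cheap, integrating them is not.
    return sum(partials)

assert parallel_cover([1, 2, 3]) == 14
```

Note that the aggregation step is a single reduction here only because the subproblems don't interact; any cross-coordinate dependency would force the integration the section describes.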

Verdict: Valid for decomposable tasks; fails when integration is required (which triggers globality pressure anyway).


1.5 Predictable Relevance Schedule

How it escapes: If relevance changes but on a perfectly predictable schedule (e.g., “coordinate 1 on odd steps, coordinate 2 on even steps”), the system can precompile the allocation sequence without needing error feedback.

What you sacrifice: Robustness to schedule perturbation. Any deviation from the known schedule causes failure. This is a special case of “no real novelty.”

Example systems:
- Time-multiplexed sensors with fixed duty cycles
- Round-robin attention in predictable environments
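The precompiled-schedule escape is essentially this (toy sketch; the two-coordinate round-robin is an illustrative assumption): the allocation sequence is generated open-loop, with no error feedback ever consulted.

```python
# Toy sketch: a precompiled round-robin allocation. No error feedback
# is consulted; correctness depends entirely on the schedule holding.
from itertools import cycle

schedule = cycle([0, 1])          # coordinate 0, then coordinate 1, forever
def precompiled_allocation(steps):
    return [next(schedule) for _ in range(steps)]

assert precompiled_allocation(4) == [0, 1, 0, 1]
# A single perturbation (relevance lingering on coordinate 0 for an
# extra step) desynchronizes the entire remaining sequence -- which is
# why this case collapses into "no real novelty."
```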

Verdict: Degenerates to 1.2 (no novelty) under scrutiny.


2. Escaping Globality (Broadcast)

Theorem escaped: Globality Necessity

Claim: Evaluation must be readable by multiple operators.

2.1 Single Operator (No Coordination Needed)

How it escapes: If the system has only one control variable (one “knob” to turn), there’s nothing to coordinate. Local closure suffices.

What you sacrifice: Behavioral complexity. Single-operator systems can’t exhibit the flexible, multi-faceted control that characterizes general intelligence. You get a thermostat, not an agent.

Example systems:
- Simple feedback controllers (PID)
- Single-reflex organisms
- One-dimensional optimizers
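A single-operator system fits in a handful of lines (a minimal sketch; the proportional gain and setpoint are illustrative values, not from the source): one error signal drives one control variable, so there is nothing to broadcast.

```python
# Minimal sketch of a single-operator system: a proportional controller
# ("one knob"). Local closure suffices because nothing else must be
# coordinated with it.
def p_controller(setpoint, reading, gain=0.5):
    # One control variable, one error signal, no broadcast channel.
    return gain * (setpoint - reading)

temp = 15.0
for _ in range(50):
    temp += p_controller(20.0, temp)     # error halves on every step
assert abs(temp - 20.0) < 0.01           # converges to the setpoint
```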

Verdict: Valid but incompatible with the “multiple operators” assumption (A4) that makes general competence interesting.


2.2 Weak Coupling (γ → 0)

How it escapes: If operators’ objectives are nearly independent (coupling strength γ is small), each can optimize locally without cross-module coordination. The coordination error term γ · Pr[miscoordination] becomes negligible.

What you sacrifice: Integrated behavior. The system acts as a loose federation of independent modules. Tasks requiring tight coordination (planning + memory + action) will fail.

Example systems:
- Modular systems with minimal inter-module dependencies
- “Society of mind” architectures with weak links
- Distributed systems without shared objectives
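The limit can be made numeric (an illustrative sketch; the loss decomposition and the specific values are assumptions, not from the source): as γ → 0 the coordination penalty vanishes and local loss dominates.

```python
# Numeric sketch of the weak-coupling limit: the expected coordination
# penalty gamma * Pr[miscoordination] vanishes as gamma -> 0, so local
# optimization alone accounts for almost all of the total loss.
def total_loss(local_loss, gamma, p_miscoord):
    return local_loss + gamma * p_miscoord

assert total_loss(1.0, gamma=0.0, p_miscoord=0.5) == 1.0    # fully decoupled
assert total_loss(1.0, gamma=1e-6, p_miscoord=0.5) < 1.001  # near-independent
```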

Verdict: Valid in decoupled domains; breaks down precisely when integration matters.


2.3 Tasks Decompose Cleanly

How it escapes: If every task can be factored into independent subtasks assigned to separate operators with no tradeoffs, there’s no coordination problem to solve.

What you sacrifice: The ability to handle tasks with inherent tradeoffs. Real-world tasks often involve competing constraints (speed vs. accuracy, exploration vs. exploitation, short-term vs. long-term) that require integrated evaluation.

Example systems:
- Pipelines with no feedback between stages
- Assembly lines with independent stations
- Purely hierarchical decompositions

Verdict: Valid for toy domains; unrealistic for complex environments.


2.4 Shared Environment as Implicit Broadcast

How it escapes: If operators can observe each other’s effects through the environment (stigmergy), they may coordinate without an internal broadcast channel.

What you sacrifice: Speed and bandwidth. Environmental coordination is slow and lossy. It also requires the environment to be observable and stable enough to carry the coordination signal—which may not hold under novelty.

Example systems:
- Ant colonies (pheromone trails)
- Blackboard architectures with external memory
- Multi-agent systems with shared world state
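Stigmergic coordination reduces to something like this toy sketch (hypothetical; the single "trail" counter stands in for any environmental signal): agents never exchange messages, they only read and reinforce a shared environment cell.

```python
# Illustrative stigmergy sketch: agents coordinate only through a
# shared environment cell (a "pheromone" counter), never directly.
environment = {"trail": 0}

def ant(env):
    # Each agent reads the trail others left and reinforces it.
    env["trail"] += 1
    return env["trail"]

readings = [ant(environment) for _ in range(3)]
assert readings == [1, 2, 3]   # coordination emerges, but at one write
                               # per step: a slow, low-bandwidth channel
```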

Verdict: Partially valid; works for slow coordination but not for the rapid, high-bandwidth integration needed for real-time general competence.


3. Escaping Self-Indexing (Ownership)

Theorem escaped: Self-Indexing Necessity

Claim: Evaluation must be tagged to the responsible internal branch.

3.1 No Branching (Single Trajectory)

How it escapes: If the system never maintains competing internal candidates (no hypotheses, no alternative plans, no exploration), there’s only one trajectory to assign credit to. Ownership is trivial.

What you sacrifice: Flexibility and robustness. Single-trajectory systems can’t hedge bets, explore alternatives, or recover from early mistakes. They’re committed to one path.

Example systems:
- Greedy policies with no lookahead
- Purely reactive controllers
- Systems without working memory for alternatives
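The single-trajectory escape and its cost fit in one toy search tree (a hedged sketch; the tree and rewards are hypothetical): with one committed path, "which branch was responsible" never arises, but a locally attractive choice is unrecoverable.

```python
# Hedged sketch: a greedy policy maintains a single trajectory, so
# credit assignment is trivial -- and early mistakes are permanent.
tree = {
    "root": {"a": 5, "b": 3},   # immediate rewards at each choice point
    "a":    {"a1": 0},          # the greedy pick leads to a dead end
    "b":    {"b1": 10},         # the forgone branch held the real payoff
}

def greedy(tree, node="root", total=0):
    while node in tree:
        best = max(tree[node], key=tree[node].get)   # one candidate, no branches
        total += tree[node][best]
        node = best
    return total

assert greedy(tree) == 5   # committed path root -> a -> a1 scores 5 + 0
# Exploring root -> b -> b1 would score 13, but entertaining that
# alternative is exactly the branching this escape renounces.
```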

Verdict: Valid but severely limits capability. Most interesting cognition involves entertaining alternatives.


3.2 External Credit Assignment

How it escapes: If an external system (trainer, supervisor, environment) provides precise credit signals indicating which internal choice was responsible, the agent doesn’t need to self-index.

What you sacrifice: Autonomy again. The external system must solve the credit assignment problem, which requires it to have access to the agent’s internal branching structure—essentially requiring a more sophisticated external observer.

Example systems:
- Supervised learning with per-action labels
- Human-guided debugging
- Oracle-assisted reinforcement learning

Verdict: Valid but displaces the problem; incompatible with autonomous general competence.


3.3 Stateless Branching (No Persistence)

How it escapes: If branches are explored and immediately discarded (no memory of which branch was taken), there’s nothing to index. Each step is a fresh start.

What you sacrifice: Learning from branching decisions. The system can explore but can’t improve its branching policy over time. It will repeat the same exploration mistakes forever.

Example systems:
- Memoryless Monte Carlo sampling
- Pure random search
- Systems with complete state reset each step
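Memoryless exploration looks like this in miniature (a toy sketch; the four-armed search problem is an illustrative assumption): every trial is an independent draw, so nothing learned from one branch carries to the next.

```python
# Toy sketch of memoryless exploration: each trial is a fresh random
# draw with no record of which branches have already failed.
import random

def stateless_search(target, trials, seed=0):
    rng = random.Random(seed)
    attempts = [rng.randrange(4) for _ in range(trials)]
    # Nothing is indexed or remembered between draws, so the same
    # wrong branches recur and the hit rate never improves.
    return attempts.count(target)

hits = stateless_search(target=2, trials=1000)
assert 0 < hits < 1000   # succeeds by luck, never by learning
```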

Verdict: Valid but incompatible with learning-based improvement—a core requirement for handling novelty.


3.4 Unique Branch Identification via Structure

How it escapes: If branches have unique structural signatures (different enough that evaluation automatically specifies which branch it applies to), explicit self-indexing may be unnecessary.

What you sacrifice: Scalability. As the number of branches grows, maintaining unique signatures becomes expensive. And if branches are similar (as they often are—“should I retrieve fact A or fact B?”), structural uniqueness fails.

Example systems:
- Systems with highly differentiated action spaces
- Architectures where each branch has distinct neural pathways

Verdict: Partially valid; degrades as branch similarity increases.


4. Compound Escapes (Avoiding the Capstone)

The capstone theorem requires all three properties. To escape it entirely, you can:

| Escape Combination | What You Get | What You Lose |
| --- | --- | --- |
| k ≥ n + single operator | Unlimited simple control | Efficiency, complexity |
| No novelty + decomposable tasks | Narrow specialist | Generality, integration |
| External oracle + external credit | Puppet agent | Autonomy |
| Brute parallelism + stigmergy + no branching | Reactive swarm | Integration, learning, flexibility |

Observation: Every complete escape sacrifices something central to what we mean by “bounded general intelligence under novelty.”


5. One-Line Summary

Every escape from Desmocycle necessity trades away something essential to bounded general intelligence: capacity, generality, autonomy, integration, or learning.