---
language: en
license: mit
tags:
- metacognition
- interpretability
- control-theory
- explainability
- research
- pytorch
- dynamic-inference
- safety
- signal-modeling
model_name: "SCI: Surgical Cognitive Interpreter"
library_name: pytorch
papers:
- https://arxiv.org/abs/2511.12240
---
# SCI: Surgical Cognitive Interpreter

*A Metacognitive Control Layer for Signal Dynamics*

This repository contains the reference implementation of **SCI**, a closed-loop metacognitive controller that wraps existing models and turns prediction into a regulated process rather than a one-shot function evaluation.
SCI is introduced in:
**Vishal Joshua Meesala**
*SCI: A Metacognitive Control for Signal Dynamics*
arXiv:2511.12240, 2025
https://arxiv.org/abs/2511.12240
The paper formalizes interpretability as a feedback-regulated state: SCI monitors a scalar interpretive signal \( SP(t) \), defined over reliability-weighted, multi-scale features, and adaptively adjusts an interpreter’s parameters to reduce interpretive error
\[
\Delta SP(t) = SP^\*(t) - SP(t),
\]
under Lyapunov-style stability constraints.
---
## 1. Motivation
Most neural networks are deployed as **open-loop function approximators**: they map inputs to outputs in a single forward pass, with no explicit mechanism to regulate computation, explanation quality, or clarification depth.
In safety-critical domains (medicine, industrial monitoring, environmental sensing), this is brittle:
- Easy and ambiguous inputs receive the same computational budget.
- Explanations are static and post hoc, with no adaptation under drift.
- There is no explicit notion of “interpretive error” that can be monitored or controlled.
SCI addresses this by introducing a **closed-loop metacognitive layer** that:
- Monitors a scalar interpretive state \( SP(t) \in [0,1] \).
- Computes interpretive error \( \Delta SP = SP^\* - SP \) relative to a target clarity level.
- Updates interpreter parameters \( \Theta \) according to a Lyapunov-inspired rule with safeguards.
- Allocates more inference steps and adaptation to ambiguous or unstable inputs.
- Exposes \( \Delta SP \) as a **safety signal** for abstention, escalation, or human-in-the-loop review.
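As a minimal illustration of the last point, the interpretive error can gate whether a prediction is accepted or escalated. The function below is a hedged sketch: the name `route_prediction` and the threshold `tau` are illustrative, not part of the SCI API.

```python
# Illustrative only: using the interpretive error ΔSP as an abstention signal.
# `route_prediction` and the threshold `tau` are assumptions for this sketch,
# not names exposed by the SCI library.

def route_prediction(prediction, delta_sp, tau=0.3):
    """Accept the prediction when interpretive error is small; otherwise
    escalate to human-in-the-loop review."""
    if abs(delta_sp) > tau:
        return {"action": "escalate", "prediction": None, "delta_sp": delta_sp}
    return {"action": "accept", "prediction": prediction, "delta_sp": delta_sp}
```

In a deployment, the escalation branch would trigger abstention, extra inference steps, or a request for human review, depending on the domain.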
Empirically, SCI:
- Allocates **3.6–3.8× more computation** to misclassified inputs than to correct ones.
- Produces an effective scalar safety signal \( \Delta SP \) with **AUROC ≈ 0.70–0.86** for error detection across vision, medical, and industrial benchmarks.
---
## 2. Conceptual Overview
SCI is a modular architecture with four main components.
### 2.1 Decomposition \( \Pi \)
A multi-scale, multimodal feature bank \( P(t, s) \) that organizes raw signals \( X(t) \) into interpretable components:
- Rhythmic components (frequency bands, oscillations)
- Trend components (baselines, drifts)
- Spatial / structural components (sensor topology, modes)
- Cross-modal interactions (coherence, correlation, causal couplings)
- Latent composites \( \Pi^\* \)
Each feature is weighted by a reliability score \( w_f(t) \) derived from:
- Signal-to-noise ratio (SNR)
- Temporal persistence
- Cross-sensor coherence
These weights ensure degraded or untrustworthy features are down-weighted.
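One plausible way to combine the three cues into a single weight is sketched below; the squashing of SNR and the geometric-mean combination are assumptions of this sketch, not the paper's exact formula.

```python
import math

def reliability_weight(snr_db, persistence, coherence):
    """Illustrative reliability score w_f(t) in [0, 1].

    snr_db:      signal-to-noise ratio in dB (squashed to [0, 1] below)
    persistence: fraction of recent windows in which the feature was stable
    coherence:   mean cross-sensor coherence in [0, 1]

    The sigmoid center (6 dB) and the geometric-mean combination are
    assumptions for illustration, not the reference implementation.
    """
    snr_score = 1.0 / (1.0 + math.exp(-0.5 * (snr_db - 6.0)))
    # Geometric mean: any single degraded cue pulls the weight down sharply.
    return (snr_score * persistence * coherence) ** (1.0 / 3.0)
```

The geometric mean is chosen here because a feature that fails any one test (e.g., zero persistence) should be fully suppressed rather than merely discounted.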
---
### 2.2 Interpreter \( \psi_\Theta \)
A knowledge-guided interpreter that maps the reliability-weighted feature bank into:
- **Markers** \( m_k \): human-meaningful states or concepts
- **Confidences** \( p_k(t) \): calibrated probabilities
- **Rationales** \( r_k(t) \): sparse feature-level attributions and/or templated text
This component can be instantiated as a linear or shallow neural head on top of \( P(t, s) \), optionally constrained by domain rules or ontologies.
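Such a head might look like the following PyTorch sketch. The class name, layer sizes, and the softmax calibration are assumptions for illustration, not the repository's API.

```python
import torch
import torch.nn as nn

class MarkerHead(nn.Module):
    """Illustrative shallow interpreter psi_Theta: maps a reliability-weighted
    feature bank to marker confidences p_k(t). Names and sizes are assumptions,
    not the SCI reference implementation."""

    def __init__(self, n_features: int, n_markers: int):
        super().__init__()
        self.linear = nn.Linear(n_features, n_markers)

    def forward(self, features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # Down-weight unreliable features before interpretation.
        logits = self.linear(features * weights)
        return torch.softmax(logits, dim=-1)
```

Domain rules or ontologies would enter as constraints on `self.linear` (e.g., sign or sparsity masks), which is what keeps the markers human-meaningful.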
---
### 2.3 Surgical Precision (SP)
\( SP(t) \in [0,1] \) aggregates calibrated components such as:
- Clarity / selectivity
- Pattern strength
- Domain consistency
- Predictive alignment
In the minimal implementation, \( SP \) is derived from the normalized entropy of a marker or predictive distribution (one minus normalized entropy):
high SP corresponds to focused, confident internal use of markers;
low SP corresponds to a diffuse or ambiguous internal state.
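A minimal sketch of this scalar, assuming SP is defined as one minus the normalized Shannon entropy of the distribution:

```python
import math

def surgical_precision(probs, eps=1e-12):
    """Illustrative SP in [0, 1]: one minus normalized Shannon entropy.

    Returns ~1.0 for a one-hot (focused) distribution and ~0.0 for a
    uniform (diffuse) one. The exact aggregation in SCI may differ.
    """
    k = len(probs)
    if k <= 1:
        return 1.0
    entropy = -sum(p * math.log(p + eps) for p in probs if p > 0.0)
    return 1.0 - entropy / math.log(k)
```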
---
### 2.4 Closed-Loop Controller
The controller monitors \( \Delta SP(t) \) and updates \( \Theta \) when interpretive clarity is insufficient.
\[
\Theta_{t+1} = \text{Proj}_{\mathcal{C}}\left[\Theta_t + \eta_t\left(\Delta SP(t)\nabla_\Theta SP(t) + \lambda_h u_h(t)\right)\right],
\]
where:
- \( \eta_t \): step-size schedule
- \( \lambda_h \): human-gain budget
- \( u_h(t) \): bounded human feedback signal (optional)
- \( \text{Proj}_{\mathcal{C}} \): projection enforcing constraints (trust region, sparsity, parameter bounds)
Lyapunov-style analysis shows that, under suitable conditions on \( \eta_t \) and \( \lambda_h \), the “interpretive energy”
\[
V(t) = \tfrac{1}{2}(\Delta SP(t))^2
\]
decreases monotonically up to bounded noise, so explanations become more stable and consistent over time.
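The update rule above can be sketched in a few lines of NumPy. Here the projection \( \text{Proj}_{\mathcal{C}} \) is simplified to box constraints via `np.clip`; the function name, defaults, and this simplification are assumptions of the sketch, not the reference controller.

```python
import numpy as np

def controller_step(theta, sp, sp_target, grad_sp, eta=0.1,
                    lam=0.05, u_h=None, bounds=(-1.0, 1.0)):
    """One illustrative closed-loop update of the interpreter parameters Theta.

    theta:    current parameters (array)
    grad_sp:  gradient of SP w.r.t. theta, supplied by the interpreter
    u_h:      optional bounded human feedback signal
    bounds:   box constraints standing in for the projection Proj_C
    """
    delta_sp = sp_target - sp          # interpretive error ΔSP
    update = delta_sp * grad_sp        # gradient term ΔSP · ∇_Θ SP
    if u_h is not None:
        update = update + lam * u_h    # bounded human-feedback term
    return np.clip(theta + eta * update, *bounds)
```

Each call moves \( \Theta \) in the direction that shrinks \( V(t) = \tfrac{1}{2}(\Delta SP)^2 \), with the clipping step playing the role of the trust-region projection.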
This yields a **reactive interpretability layer** that not only explains but also stabilizes explanations under drift, feedback, and evolving conditions.
---
## 3. Repository Structure
The repository is organized as follows (file names may evolve slightly as the framework matures):
```text
sci/ # Core SCI library
__init__.py
config.py
controller.py # SCIController: closed-loop update over Θ using ΔSP
decomposition.py # Decomposition Π and reliability-weighted feature bank
interpreter.py # Interpreter / marker head and SP computation
reliability.py # Reliability scores (SNR, persistence, coherence)
sp.py # SP scalar and related metrics
utils.py # Shared utilities and helper functions
configs/ # Example configuration files
mnist.yaml
mitbih.yaml
bearings.yaml
examples/ # Jupyter notebooks (to be populated)
mnist_sci_demo.ipynb
ecg_sci_demo.ipynb
bearings_sci_demo.ipynb
experiments/ # Experiment scripts, logs, and analysis
scripts/ # Training utilities, Hub utilities, etc.
push_to_hub.py
run_sci_mitbih_fixed_k.py
run_sci_bearings.py
run_sci_signal_v2.py # Signal-domain SCI experiments
plot_metacognition_hero.py # Plotting script for metacognitive behavior
sc_arxiv.pdf # Paper PDF (for convenience)
sci_latex.tex # LaTeX source of the paper
pyproject.toml
setup.cfg
LICENSE
README.md