MIND
MIND LLM PIPELINE VISUALIZATION
Tokens. Attention. Governance. Settlement.
Look down into an LLM inference pipeline. Watch tokens flow through
KV cache, attention heads, speculative decoding, governance gates,
and evidence settlement — rendered as a radial reactor core.
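One of the stages above, speculative decoding, can be sketched in a few lines: a cheap draft model proposes several tokens ahead, the slower target model checks them, and the longest agreeing prefix is accepted for free. This is a toy illustration under assumed models (`draft_propose` and `target_next` are hypothetical stand-ins, not the visualization's actual pipeline).

```python
# Toy speculative decoding over integer "tokens".
# draft_propose and target_next are illustrative assumptions,
# not the real draft/target models behind this visualization.

def draft_propose(prefix, k):
    # Hypothetical cheap draft model: guesses the next k tokens.
    return [(prefix[-1] + i + 1) % 5 for i in range(k)]

def target_next(prefix):
    # Hypothetical authoritative target model: the true next token.
    return (prefix[-1] + 1) % 7

def speculative_step(prefix, k=4):
    """Accept the longest draft prefix the target model agrees with."""
    proposed = draft_propose(prefix, k)
    accepted = []
    for tok in proposed:
        if tok == target_next(prefix + accepted):
            accepted.append(tok)   # draft guessed right: token is "free"
        else:
            break                  # mismatch: stop accepting drafts
    if len(accepted) < k:
        # On rejection, fall back to one token from the target model.
        accepted.append(target_next(prefix + accepted))
    return prefix + accepted

seq = [0]
for _ in range(3):
    seq = speculative_step(seq)
print(seq)  # → [0, 1, 2, 3, 4, 5, 6]
```

The first step accepts all four draft tokens; once the draft's arithmetic diverges from the target's, each step falls back to a single verified token — the same accept/reject rhythm the reactor animates.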
Press any key, or click anywhere
CONTROLS
G Generate — run inference, watch tokens flow through all pipeline stages
O Overload — flood 1000 requests, observe backpressure + recovery
I Inject Fault — send malformed payload, watch governance block it
V Verify — run parallel streams, confirm bit-exact determinism
N Narration — audio walkthrough of the pipeline (auto-starts)
M Sound — toggle ambient audio
1-4 Speed — animation tempo
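The Verify control's "bit-exact determinism" claim can be sketched as: run the same generation on several parallel streams and confirm every stream hashes to the same digest. The toy `generate` below is a hypothetical deterministic decoder standing in for real inference.

```python
# Minimal sketch of the Verify (V) check: parallel streams must agree
# bit-for-bit. generate() is an assumed stand-in for real inference.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def generate(seed, n=16):
    # Hypothetical deterministic decoder: output depends only on seed.
    state, out = seed, []
    for _ in range(n):
        state = (state * 1103515245 + 12345) % (1 << 31)  # LCG step
        out.append(state % 50257)                         # fake token id
    return str(out).encode("utf-8")

def digest(payload):
    # Hash the raw output bytes so any single-bit drift is visible.
    return hashlib.sha256(payload).hexdigest()

# Four parallel streams, identical inputs.
with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(lambda _: digest(generate(seed=42)), range(4)))

# Bit-exact determinism: exactly one distinct digest across streams.
assert len(set(digests)) == 1
print("VER ok:", len(digests), "streams match")
```

Hashing the serialized output (rather than comparing token lists element-wise) keeps the check cheap even for long generations, and a single mismatched bit anywhere changes the digest.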
GPU: checking... NPU: checking...
mindlang.dev
INFERENCE REACTOR
MIND CORE PIPELINE · VISUALIZATION
FPS -- Cycle 0 IDLE
Mode: IDLE I1✓ I2✓ I3✓ I4✓ I5✓ I6✓ I7✓ I8✓ I9✓ Trace: -- | Load: 0%
RUNTIME LOG
TRACE -- GOV -- FAULT EV 0 EO (0x0000) CALF 1.0000000 Q 0 VER
SIMULATED
TPS -- t/s
TTFT --
Cache --%
Spec --%
Gov --
EO/s --
LLM OUTPUT