INFERENCE REACTOR
MIND LLM PIPELINE VISUALIZATION
Tokens. Attention. Governance. Settlement.
Look down into an LLM inference pipeline. Watch tokens flow through
KV cache, attention heads, speculative decoding, governance gates,
and evidence settlement — rendered as a radial reactor core.
ENTER REACTOR
or click anywhere
CONTROLS
G Generate — run inference, watch tokens flow through all pipeline stages
O Overload — flood 1000 requests, observe backpressure + recovery
I Inject Fault — send malformed payload, watch governance block it
V Verify — run parallel streams, confirm bit-exact determinism
N Narration — audio walkthrough of the pipeline (auto-starts)
M Sound — toggle ambient audio
1-4 Speed — animation tempo
GPU: checking...
NPU: checking...
▼ Browser Requirements — WebGPU + NPU
WebGPU — GPU Rendering
🌐
Chrome 113+ ✓ Enabled by default
🔷
Edge 113+ ✓ Enabled by default
🧭
Safari 18+ ✓ Enabled by default (Safari 17: Develop → Feature Flags → WebGPU)
🦊
Firefox Canvas2D fallback — WebGPU behind a flag, not yet enabled by default
WebNN — NPU Acceleration (experimental)
🌐
Chrome 124+ experimental
chrome://flags/#web-machine-learning-neural-network
Set to "Enabled", relaunch browser
Written in MIND — compiled to WebGPU (GPU), WebNN (NPU), and native ELF (CPU)
Mode: IDLE
I1✓ I2✓ I3✓ I4✓ I5✓ I6✓ I7✓ I8✓ I9✓
Trace: -- | Load: 0%
RUNTIME LOG
TRACE --
│
GOV --
│
FAULT --
│
EV 0 EO (0x0000)
│
CALF 1.0000000
│
Q 0
│
VER --
SIMULATED
TTFT --
Cache --%
Spec --%
Gov --
EO/s --