
Performance
Benchmarks
Verified performance data across the MIND stack — compiler, runtime, serialization, and protocol. WebGPU benchmarks run in your browser. Compiler benchmarks are reproducible from source.
GEMM — Matrix Multiplication
WebGPU · Tiled GEMM on WebGPU at matrix sizes 1024–4096. Compares MindLang's AOT-compiled WGSL shaders against ONNX Runtime Web's WebGPU backend performing the identical operation. Measures dispatch time, GFLOPS, and optionally compile overhead.
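The GFLOPS figure reported by the GEMM benchmark is derived from the measured dispatch time and the operation's flop count. A minimal sketch of that conversion; the 2·N³ flop count for a square matmul is standard, but the example timing below is an illustrative assumption, not a measured MindLang result:

```python
def gemm_gflops(n: int, seconds: float) -> float:
    """A square N x N x N matmul performs ~2*N^3 floating-point ops
    (N^3 multiplies + N^3 adds); divide by elapsed time for GFLOPS."""
    flops = 2 * n ** 3
    return flops / seconds / 1e9

# e.g. a hypothetical 1024-size GEMM dispatched in 1 ms:
print(round(gemm_gflops(1024, 1e-3), 1))  # → 2147.5
```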
Compiler — MIND vs PyTorch 2.10
Criterion.rs · Frontend compilation (parse → typecheck → IR) vs PyTorch's full torch.compile cold-start pipeline. Ubuntu 24.04, i7-5930K, RTX 3080, CUDA 12.8.
Compiler — MIND vs Mojo 0.26.1
Criterion.rs · Frontend compilation vs Mojo's mojo build, a full LLVM compilation to a native binary. Same platform; verified February 2026.
Compiler — MIND vs JAX 0.9
Criterion.rs · Frontend compilation vs JAX's jax.jit() cold-start XLA compilation (compilation cache disabled). RTX 3080, CUDA 12.8, JAX 0.9.0.1.
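All three compiler comparisons measure cold-start time: the first invocation, which includes compilation, is timed separately from a warm call. A minimal sketch of that measurement pattern, using a stand-in workload with one-time setup cost rather than a real compiler:

```python
import time

def measure_cold_vs_warm(fn, *args):
    """Time the first call (cold: includes any compile/setup work)
    and a second call (warm: setup already cached) separately."""
    t0 = time.perf_counter()
    fn(*args)
    cold = time.perf_counter() - t0
    t0 = time.perf_counter()
    fn(*args)
    warm = time.perf_counter() - t0
    return cold, warm

# Hypothetical workload: builds a table once (simulated "compilation"),
# then only reads it on later calls.
_cache = {}
def workload(n):
    if "table" not in _cache:
        _cache["table"] = [i * i for i in range(n)]
    return sum(_cache["table"])

cold, warm = measure_cold_vs_warm(workload, 500_000)
print(f"cold={cold:.4f}s warm={warm:.4f}s")
```

Criterion.rs applies the same idea with statistical rigor (warm-up iterations, outlier rejection); this sketch only shows why the first call must be isolated.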
MIC Format — Token Efficiency
Serialization · MIC (MIND IR Compact) format benchmarked against JSON, TOML, and TOON for AI-agent workflows. Measures token count, parse speed, and annual cost at GPT-5.2 pricing.
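The annual-cost metric follows from token count alone: tokens per payload × calls per day × 365 × price per token. A sketch of that arithmetic; every number below (token counts, call volume, $/1M-token price) is a hypothetical placeholder, not a measured MIC or JSON figure:

```python
def annual_cost(tokens_per_payload: int, calls_per_day: int,
                usd_per_million_tokens: float) -> float:
    """Yearly input-token spend for one serialized payload shape."""
    yearly_tokens = tokens_per_payload * calls_per_day * 365
    return yearly_tokens * usd_per_million_tokens / 1e6

# Hypothetical: a verbose format at 800 tokens vs a compact one at 300,
# 10k calls/day, $2 per 1M input tokens.
verbose = annual_cost(800, 10_000, 2.0)
compact = annual_cost(300, 10_000, 2.0)
print(f"${verbose:,.0f}/yr vs ${compact:,.0f}/yr")  # → $5,840/yr vs $2,190/yr
```

The savings scale linearly with call volume, which is why token count rather than byte size is the headline metric for agent workflows.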
MAP Protocol — Agent Communication
Protocol · MAP (MIND Agent Protocol) compared against JSON-RPC for AI-agent session communication. Measures wire size, token count, and per-session overhead.
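The JSON-RPC side of the wire-size comparison can be reproduced by encoding a request and counting bytes. A sketch with a hypothetical method name and params (the JSON-RPC 2.0 envelope fields are real; the payload is invented for illustration):

```python
import json

# A JSON-RPC 2.0 request: the "jsonrpc", "id", "method", "params"
# envelope is fixed overhead carried by every message.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "agent.send",               # hypothetical method
    "params": {"session": "abc123", "message": "hello"},
}
wire = json.dumps(request, separators=(",", ":")).encode("utf-8")
print(len(wire))  # bytes on the wire for this single request
```

A compact binary or positional encoding avoids repeating the envelope keys per message, which is where per-session savings accumulate.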
WebGPU benchmarks require Chrome 113+ or Edge 113+. Compiler benchmarks are reproducible from source. Results vary by hardware.