
Performance

Benchmarks

Verified performance data across the MIND stack — compiler, runtime, serialization, and protocol. WebGPU benchmarks run in your browser. Compiler benchmarks are reproducible from source.

GEMM — Matrix Multiplication

WebGPU

Tiled GEMM on WebGPU (matrix sizes 1024–4096). Compares MindLang's AOT-compiled WGSL shaders against ONNX Runtime Web's WebGPU backend performing the identical operation. Measures dispatch time, GFLOPS, and optionally includes compile overhead.

7–19× faster·~4.5 TFLOPS peak·8×4 register tiling·f32 precision
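The GFLOPS figure above follows directly from the dispatch time and the arithmetic cost of a square matmul. A minimal sketch (not the benchmark harness itself, and the 30.5 ms dispatch time is back-derived from the quoted ~4.5 TFLOPS peak, not a measured value):

```python
# Illustrative sketch: how achieved GFLOPS is derived from GEMM
# dispatch time. A square N x N matmul performs 2 * N^3 floating-point
# operations (one multiply + one add per inner-product term).

def gemm_gflops(n: int, dispatch_seconds: float) -> float:
    """Achieved GFLOPS for an N x N x N GEMM given its dispatch time."""
    flops = 2 * n ** 3
    return flops / dispatch_seconds / 1e9

# At the quoted ~4.5 TFLOPS peak, a 4096^3 GEMM implies a dispatch
# time of roughly 2 * 4096**3 / 4.5e12, i.e. about 30.5 ms.
print(f"{gemm_gflops(4096, 0.0305):,.0f} GFLOPS")
```

The same formula is why GFLOPS grows with matrix size at fixed efficiency: work scales as N³ while dispatch overhead stays roughly constant.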

Compiler — MIND vs PyTorch 2.10

Criterion.rs

Frontend compilation (parse → typecheck → IR) vs the full PyTorch torch.compile cold-start pipeline. Ubuntu 24.04, i7-5930K, RTX 3080, CUDA 12.8.

35,000–176,000× faster·176,000× conv2d·122,000× simple_mlp·1.8–15.5 µs frontend

Compiler — MIND vs Mojo 0.26.1

Criterion.rs

Frontend compilation vs Mojo's mojo build full LLVM compilation to a native binary. Same platform; verified February 2026.

135,000–458,000× faster·458,000× scalar_math·280,000× matmul·135,000× mlp

Compiler — MIND vs JAX 0.9

Criterion.rs

Frontend compilation vs JAX jax.jit() cold-start XLA compilation (cache disabled). RTX 3080, CUDA 12.8, JAX 0.9.0.1.

21,200–95,100× faster·95,100× large_matmul·58,600× simple_mlp·43,100× small_matmul
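Across all three comparisons, the multipliers are the same arithmetic: baseline cold-start time divided by MIND frontend time. A sketch of that calculation, where the pairing of the 15.5 µs frontend time with the 176,000× conv2d figure is illustrative, not taken from the published result tables:

```python
# Sketch of how the speedup multipliers are derived: each is the
# baseline's cold-start compile time divided by MIND's frontend time.
# The specific pairing below (15.5 us vs a ~2.73 s torch.compile
# cold start) is illustrative, chosen to land near the 176,000x
# conv2d figure.

def speedup(baseline_seconds: float, frontend_microseconds: float) -> float:
    """Speedup multiplier: baseline time over frontend time."""
    return baseline_seconds / (frontend_microseconds * 1e-6)

print(f"{speedup(2.73, 15.5):,.0f}x")
```

Because the frontend times sit in the microsecond range while the baselines are full LLVM/XLA/Inductor pipelines measured in seconds, five- to six-figure multipliers fall out of the division.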

MIC Format — Token Efficiency

Serialization

MIC (MIND IR Compact) format benchmarked against JSON, TOML, and TOON for AI agent workflows. Measures token count, parse speed, and annual cost at GPT-5.2 pricing.

5.3× fewer tokens vs JSON·$396/yr saved per 1M IRs·2.26 µs parse·52 tokens
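The annual-cost figure is a straightforward function of the token counts. A sketch of that arithmetic, where the per-token price is left as a parameter since the GPT-5.2 rate is not restated here (the $1.75 per 1M tokens below is a hypothetical value for illustration):

```python
# Sketch of the annual-cost arithmetic behind the MIC figures. Token
# counts follow the quoted numbers (52 tokens per IR in MIC, 5.3x
# fewer than JSON); the per-million-token price is a hypothetical
# parameter, not the actual GPT-5.2 rate.

def annual_savings(irs_per_year: int, mic_tokens: int,
                   json_ratio: float, usd_per_mtok: float) -> float:
    """USD saved per year by serializing IRs as MIC instead of JSON."""
    json_tokens = mic_tokens * json_ratio
    saved_tokens = (json_tokens - mic_tokens) * irs_per_year
    return saved_tokens / 1e6 * usd_per_mtok

# Example: 1M IRs/year at a hypothetical $1.75 per 1M tokens.
print(f"${annual_savings(1_000_000, 52, 5.3, 1.75):,.2f}/yr")
```

At 1M IRs per year the savings scale linearly with the token price, so the exact dollar figure depends entirely on the model's pricing tier.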

MAP Protocol — Agent Communication

Protocol

MAP (MIND Agent Protocol) compared against JSON-RPC for AI agent session communication. Measures wire size, token count, and per-session overhead.

4.3× fewer tokens vs JSON-RPC·77% smaller wire size·234 bytes vs 1,004 bytes
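The wire-size reduction follows from the two byte counts quoted above. A quick check:

```python
# Quick check of the quoted wire-size reduction: a 234-byte MAP
# session message vs the 1,004-byte JSON-RPC equivalent.

def reduction_pct(ours: int, baseline: int) -> float:
    """Percentage reduction of `ours` relative to `baseline`."""
    return (1 - ours / baseline) * 100

pct = reduction_pct(234, 1004)
print(f"{pct:.0f}% smaller")  # 234/1004 gives a ~77% reduction
```

Per-session overhead compounds this: every request/response round trip pays the framing cost again, so the byte gap grows linearly with session length.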

WebGPU benchmarks require Chrome 113+ or Edge 113+. Compiler benchmarks reproducible from source. Results vary by hardware.