FFI & Bindings
MIND provides a Foreign Function Interface (FFI) for integrating compiled models with C, Python, and Rust applications.
Python Bindings
The Python bindings provide a high-level API for loading and running MIND models:
```python
import mind

# Load a compiled model
runtime = mind.Runtime()
model = mind.Model.load("model.mind.bin")

# Run inference
input_tensor = mind.tensor([[1.0, 2.0], [3.0, 4.0]])
output = model.forward(input_tensor)
print(output.numpy())
```

Install with `pip install mind-runtime` (coming soon).
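For larger inputs it is usually more convenient to start from NumPy. The snippet below is a minimal sketch of that round trip; `output.numpy()` is the documented path back to NumPy, while passing a NumPy array directly to `mind.tensor` is an assumption, so the sketch goes through `.tolist()`.

```python
import numpy as np
import mind

model = mind.Model.load("model.mind.bin")

# Assumption: mind.tensor accepts nested Python lists (as shown above),
# so the NumPy array is converted with .tolist() before wrapping.
x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
output = model.forward(mind.tensor(x.tolist()))

# output.numpy() is the documented way back to NumPy.
print(output.numpy())
```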
C API
The C API provides low-level access for embedded and systems programming:
```c
#include <stdint.h>  /* for int64_t */
#include "mind_runtime.h"

int main() {
    // Initialize runtime
    mind_runtime_t runtime;
    mind_runtime_create(&runtime);

    // Load model
    mind_model_t model;
    mind_model_load(&runtime, "model.mind.bin", &model);

    // Create input tensor
    float input_data[] = {1.0f, 2.0f, 3.0f, 4.0f};
    int64_t shape[] = {2, 2};
    mind_tensor_t input;
    mind_tensor_create(&runtime, input_data, shape, 2, MIND_F32, &input);

    // Run inference
    mind_tensor_t output;
    mind_model_forward(&model, &input, &output);

    // Cleanup, in reverse order of creation
    mind_tensor_destroy(&output);
    mind_tensor_destroy(&input);
    mind_model_destroy(&model);
    mind_runtime_destroy(&runtime);
    return 0;
}
```

Rust Crate
Native Rust integration with the mind-runtime crate:
```rust
use mind_runtime::{Runtime, Model, Tensor};

fn main() -> Result<(), mind_runtime::Error> {
    // Initialize runtime
    let runtime = Runtime::new()?;

    // Load model
    let model = Model::load(&runtime, "model.mind.bin")?;

    // Create input tensor
    let input = Tensor::from_slice(&runtime, &[1.0f32, 2.0, 3.0, 4.0], &[2, 2])?;

    // Run inference
    let output = model.forward(&input)?;
    println!("Output: {:?}", output.to_vec::<f32>()?);
    Ok(())
}
```

Add the crate to your `Cargo.toml`:
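```toml
[dependencies]
mind-runtime = "0.1"
```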
ABI Stability
- The C ABI is stable within major versions.
- The binary model format (`.mind.bin`) follows semantic versioning.
- The runtime version must match or exceed the model's compile version.
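As an illustration, here is a minimal sketch of one reading of those two rules combined. Nothing below is part of the documented API; how you obtain the two version strings is left open.

```python
def is_compatible(runtime_version: str, model_version: str) -> bool:
    """Sketch of the ABI rule above; versions are 'major.minor.patch' strings."""
    r_major, r_minor, r_patch = (int(p) for p in runtime_version.split("."))
    m_major, m_minor, m_patch = (int(p) for p in model_version.split("."))
    # The C ABI is only stable within a major version, so majors must match;
    # within a major version, the runtime must be at least as new as the model.
    return (r_major == m_major) and ((r_minor, r_patch) >= (m_minor, m_patch))

assert is_compatible("1.3.0", "1.2.5")      # newer runtime: OK
assert not is_compatible("1.1.0", "1.2.0")  # older runtime: rejected
assert not is_compatible("2.0.0", "1.9.9")  # major mismatch: ABI break
```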
Exporting from MIND
Export functions from MIND code for FFI consumption:
```mind
// In your .mind file
@export
fn predict(input: Tensor<f32, N, 784>) -> Tensor<f32, N, 10> {
    // Model implementation
    forward(input)
}
```
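Once exported, the function can be invoked from a host language. The sketch below assumes the Python bindings expose `@export`-ed functions as methods on the loaded model; that lookup mechanism is an assumption, not a documented guarantee.

```python
import numpy as np
import mind

model = mind.Model.load("model.mind.bin")

# Batch of 32 inputs with 784 features each, matching Tensor<f32, N, 784> with N = 32.
batch = np.random.rand(32, 784).astype(np.float32)

# Assumption: the exported `predict` is reachable as a model attribute.
logits = model.predict(mind.tensor(batch.tolist()))
print(logits.numpy().shape)  # expected: (32, 10)
```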
Learn More
See the full FFI specification at `mind-spec/ffi.md`.