1 point | by Lbrant 5 hours ago
1 comment
I wanted to run LLM inference in my Rust app without sending data to OpenAI. So I built Flint: drop it in and run any GGUF model locally, on device.
cargo install flint-ai
flint use TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF
Runs on Apple Silicon via Metal, NVIDIA via CUDA, or AMD via ROCm, with a CPU fallback. Models are stored in ~/.flint/models and persist across projects.
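A minimal sketch of what that shared cache means in practice: every project resolves the same directory, so a model downloaded once is reused everywhere. The path is from the post; the helper name here is mine for illustration, not part of Flint's API.

```rust
use std::path::PathBuf;

// Illustrative helper (not Flint's API): resolve the shared model
// cache described in the post, ~/.flint/models.
fn flint_model_dir() -> PathBuf {
    let home = std::env::var("HOME").unwrap_or_else(|_| ".".to_string());
    PathBuf::from(home).join(".flint").join("models")
}

fn main() {
    // Any project on the machine resolves the same path, so a model
    // pulled once (e.g. via `flint use ...`) is not re-downloaded.
    println!("shared model cache: {}", flint_model_dir().display());
}
```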
Still early (v0.1.0) but works. Would love feedback from anyone who tries it.