Installation
Get ModelReins running in under 2 minutes. Install now →
ModelReins turns idle machines into an AI compute mesh. You install a worker on any device — your laptop, a Raspberry Pi, a cloud VM — and it picks up AI jobs from a shared queue, runs them against any of 7 supported providers, and reports results back to a single dashboard.
Think of it as SETI@Home, but for language model inference. Instead of scanning radio signals, your machines process completions, summaries, code reviews, and extractions while you sleep. Jobs flow in, workers pick them up, results flow out.
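The queue-and-workers flow described above can be sketched in a few lines. This is a conceptual illustration of the idea (shared queue, workers pulling jobs, results reported to one place), not ModelReins's actual implementation; the `provider` function stands in for any of the supported providers.

```python
# Conceptual sketch of the ModelReins flow: workers pull jobs from a
# shared queue, run them, and report results back to a single place.
# Illustration only -- not the real ModelReins internals.
import queue
import threading

jobs: queue.Queue = queue.Queue()
results: dict[str, str] = {}           # stands in for the dashboard

def provider(prompt: str) -> str:
    # Stand-in for any supported provider (local or cloud).
    return f"completed: {prompt}"

def worker() -> None:
    # Each installed machine runs a loop like this: pick up a job,
    # run it against a provider, report the result back.
    while True:
        job_id, prompt = jobs.get()
        if job_id is None:             # sentinel: shut the worker down
            break
        results[job_id] = provider(prompt)
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

jobs.put(("job-1", "Summarize the changelog"))
jobs.put(("job-2", "Review this diff"))
jobs.join()                            # wait until both jobs are done
for _ in threads:
    jobs.put((None, None))             # stop every worker
for t in threads:
    t.join()

print(results["job-1"])                # → completed: Summarize the changelog
```

Real workers differ in the obvious ways (the queue is remote, jobs survive restarts, providers are real APIs), but the shape is the same: jobs flow in, workers pick them up, results flow out.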
```shell
npm install -g @mediagato/modelreins
modelreins init
modelreins worker start
```

That gives you a single worker connected to your default provider. From there, dispatch a job:
```shell
modelreins job dispatch --prompt "Summarize this quarter's changelog" --input ./CHANGELOG.md
```

The worker picks it up, runs it, and stores the result. Check the dashboard or pull the result from the CLI:
```shell
modelreins job result <job-id>
```
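If you want to drive the dispatch-then-result flow from a script, a small wrapper works. This sketch makes two assumptions that you should verify against your own CLI output: that `modelreins job dispatch` prints the new job id on stdout, and that `modelreins job result` exits non-zero until the result is ready. The runner is injectable so the wrapper can be exercised without ModelReins installed.

```python
# Sketch of scripting dispatch-then-result. Assumptions (verify against
# your CLI): dispatch prints the job id on stdout; `job result` exits
# non-zero until the result exists.
import subprocess
import time
from typing import Callable

def run_cli(*args: str) -> subprocess.CompletedProcess:
    # Invoke the real CLI; swap this out in tests.
    return subprocess.run(["modelreins", *args], capture_output=True, text=True)

def dispatch_and_wait(prompt: str, path: str,
                      run: Callable[..., subprocess.CompletedProcess] = run_cli,
                      timeout: float = 60.0) -> str:
    out = run("job", "dispatch", "--prompt", prompt, "--input", path)
    job_id = out.stdout.strip()        # assumed: dispatch prints the job id
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        res = run("job", "result", job_id)
        if res.returncode == 0:        # assumed: non-zero means "not ready"
            return res.stdout
        time.sleep(1.0)                # poll until a worker finishes the job
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```

Pass a fake `run` to test the wrapper, or call `dispatch_and_wait("Summarize this", "./CHANGELOG.md")` with the CLI installed.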
Providers
Compare all 7 providers — cost, speed, local vs cloud. See providers →
MCP Channel
Plug ModelReins into Claude Code, VS Code, or any MCP client. Set up MCP →
Cost Optimization
Run 312 jobs/week for $1.47. Real numbers, real strategies. Optimize costs →