Taskmaster can run entirely offline using local AI models. This is useful for air-gapped environments, cost-sensitive workflows, or when you prefer to keep data on your machine.
Ollama runs open-source models locally with a simple CLI.
Setup:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3

# Configure Taskmaster to use it
tm models --set-main ollama/llama3
```
No API key is needed for local Ollama. If you're connecting to a remote Ollama server, set OLLAMA_API_KEY in your environment.
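Before pointing Taskmaster at Ollama, it can help to confirm the server is actually up. A small sketch, assuming a default install (Ollama's REST API listens on port 11434, and `/api/tags` lists installed models):

```bash
# Check whether the local Ollama server is reachable and list installed models.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  OLLAMA_STATUS="up"
  # Show which models are available to Taskmaster.
  curl -fsS http://localhost:11434/api/tags
else
  OLLAMA_STATUS="down"
  echo "Ollama is not reachable; start it with 'ollama serve'."
fi
```

If the check reports the server is down, `ollama serve` (or the Ollama desktop app) needs to be running before Taskmaster can use it.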
LM Studio provides a desktop app for downloading and running models locally with zero configuration.
Setup (download a model in LM Studio and start its local server first):

```bash
tm models --set-main lmstudio/your-model-name
```
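As with Ollama, a quick reachability check can save debugging later. A sketch, assuming LM Studio's defaults (its local server exposes an OpenAI-compatible API on port 1234; `/v1/models` lists loaded models):

```bash
# Check whether LM Studio's local server is reachable.
if curl -fsS http://localhost:1234/v1/models >/dev/null 2>&1; then
  LMSTUDIO_STATUS="up"
else
  LMSTUDIO_STATUS="down"
  echo "LM Studio server not reachable; enable it from the app's server panel."
fi
```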
If you have Claude Code installed, Taskmaster can use it directly — no separate API key needed.
```bash
tm models --set-main claude-code/sonnet
```
Claude Code respects your existing configuration (~/.claude/, .claude/, CLAUDE.md) and provides codebase-aware results by analyzing your project structure.
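To see which of those configuration sources are present in your project, a quick check (purely informational; these are the locations listed above):

```bash
# List which of Claude Code's configuration sources exist here.
for f in "$HOME/.claude" ".claude" "CLAUDE.md"; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```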
Google's Gemini CLI works with a free Google account.
```bash
tm models --set-main gemini-cli/gemini-2.0-flash
```
Gemini CLI provides structured output support and codebase analysis. A Gemini Code Assist (GCA) subscription raises the rate limits.
Connect any service that implements the OpenAI API format:
```bash
# Set custom endpoint
tm models --set-main openai-compatible/your-model

# Configure the base URL in .env
OPENAI_COMPATIBLE_BASE_URL="http://localhost:8080/v1"
```
This works with vLLM, text-generation-webui, LocalAI, and other OpenAI-compatible servers.
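As a sketch, a `.env` for a local vLLM backend might look like the following. The port and key are illustrative, and the `OPENAI_COMPATIBLE_API_KEY` variable name is an assumption — many local servers accept any value:

```bash
# .env -- example for a local vLLM server (values are illustrative).
OPENAI_COMPATIBLE_BASE_URL="http://localhost:8080/v1"
# Hypothetical: local servers typically don't validate the key.
OPENAI_COMPATIBLE_API_KEY="not-needed-locally"
```

Note that the base URL should include the `/v1` path segment, since clients append endpoint paths like `/chat/completions` to it.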
| Use case | Recommended |
|---|---|
| Best quality, no internet | Ollama with a large model |
| Easy setup, desktop app | LM Studio |
| Already use Claude Code | Claude Code CLI |
| Free, Google account | Gemini CLI |
| Custom infrastructure | OpenAI-compatible endpoint |
Local models may produce lower-quality results than cloud providers, especially for complex task generation and expansion. Consider using a cloud provider for the main model and a local model as the fallback:

```bash
tm models --set-main claude-sonnet-4-20250514
tm models --set-fallback ollama/llama3
```
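If you prefer editing configuration directly, the resulting model selection is stored in `.taskmaster/config.json`. A rough sketch of what the commands above produce — exact field names may differ between versions:

```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-sonnet-4-20250514"
    },
    "fallback": {
      "provider": "ollama",
      "modelId": "llama3"
    }
  }
}
```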