AI models have knowledge cutoffs. The research command queries live web sources and returns current answers with citations, so your tasks are informed by the latest information — not stale training data.
# Ask any question — get a current, cited answer
tm research "best practices for Next.js 15 server actions"
# Narrow the query with a stack and a timeframe
tm research "Stripe webhook verification Node.js 2025"
Research is powered by your configured research model (typically Perplexity Sonar), which searches the live web and returns answers with source citations.
Don't let findings get lost in terminal history. Save research directly to a task:
# Save findings to subtask 3.4
tm research "Stripe webhook verification" --save-to=3.4
The research output is appended to the task's details, so context travels with the work.
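If you want the same effect outside `--save-to`, the answer can be captured from stdout and appended to your own notes. A minimal sketch, assuming `tm research` prints its cited answer to stdout; the `tm` function below is a stub standing in for the real CLI so the snippet runs standalone:

```shell
# Stub `tm` so this sketch runs without Taskmaster installed;
# the real CLI would print the cited answer to stdout.
tm() { echo "Answer with citations: [1] https://stripe.com/docs/webhooks"; }

# Capture the findings, then append them to a notes file --
# a manual equivalent of `--save-to=3.4`.
tm research "Stripe webhook verification" > findings.txt
cat findings.txt >> task-3.4-notes.md
```

The `findings.txt` and `task-3.4-notes.md` filenames are illustrative; any file works, since this is plain stdout redirection.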
Several Taskmaster commands can use research for better results:
# Complexity analysis with research — scoring informed by current best practices
tm analyze-complexity --research
# Task expansion with research — subtasks based on latest patterns
tm expand --id=5 --research
# Update a task with research-backed context
tm update-task --id=5 --prompt="update for latest API changes" --research
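Because each of these is just a CLI switch, research-backed updates are easy to script. A sketch that refreshes several tasks in one pass, using the `update-task` flags shown above; the task IDs are examples, and `tm` is stubbed here so the loop runs standalone:

```shell
# Stub `tm` for a standalone run; the real CLI performs the update.
tm() { echo "updated $2"; }

# Apply a research-backed update to a batch of task IDs.
for id in 5 6 7; do
  tm update-task --id="$id" --prompt="update for latest API changes" --research
done
```

The same pattern works for `expand` or `analyze-complexity`; anything that accepts `--research` can be batched this way.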
# See current research model
tm models
# Switch to a different research model
tm models --set-research sonar-deep-research
# Use interactive setup
tm models --setup
The research model defaults to Perplexity Sonar. You can switch to Perplexity Sonar Deep Research for more thorough answers, or any other provider that supports internet-connected queries.