An interactive AI flow that gathers information from your codebase or external URLs and adds the findings to your brief before you generate a plan.
Guided Research is a conversational flow you run inside a brief's AI conversation. It walks you through choosing what to research, where to look, and whether to add the findings to your brief. Use it when your brief lacks technical context, you want to ground the plan in how your codebase already works, or you need to pull in information from external documentation before plan generation begins.

Start the flow — In the brief's conversation, tell the AI you want to research a topic. Phrases like "help me research something" or "guide me through research" trigger the guided research flow.
Name your topic — The AI asks what topic you would like to research. This becomes the focus for all subsequent searches.
Choose your sources — The AI asks where to look: your codebase, external URLs, or both.
Provide URLs if needed — If you chose URLs or both, the AI asks for the links to analyze. You can provide multiple URLs.
Review the findings — The AI presents the research results organized by source: codebase findings, URL findings, or both. You review everything before anything is written to your brief.
Decide what to add — The AI asks whether you want the findings added to your brief. Choose Yes to have the findings synthesized and written into the brief in a structured format. Choose No to discard the findings without making any changes.
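Taken together, the steps above amount to a gather-review-confirm loop: collect findings from the chosen sources, then write to the brief only on an explicit "Yes". A minimal sketch in Python, using hypothetical `search_codebase` and `summarize_url` stand-ins (these are illustrative, not the product's actual API):

```python
def search_codebase(topic):
    # Hypothetical stand-in for the AI's codebase search.
    return f"patterns related to {topic}"

def summarize_url(url, topic):
    # Hypothetical stand-in for URL fetching and summarization.
    return f"{url}: notes on {topic}"

def run_guided_research(topic, sources, urls=(), confirm=False):
    """Gather findings from the chosen sources, but only return them
    for writing to the brief when the user explicitly confirms."""
    findings = {}
    if sources in ("codebase", "both"):
        findings["codebase"] = search_codebase(topic)
    if sources in ("urls", "both"):
        findings["urls"] = [summarize_url(u, topic) for u in urls]
    if not confirm:
        return None  # "No": discard findings, leave the brief untouched
    return findings  # "Yes": hand findings to the brief writer
```

The key design point is that gathering and writing are separate: nothing reaches the brief until the confirmation branch is taken.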
Codebase-aware research: When you search the codebase, the AI looks for existing implementations, patterns, and relevant code that should inform the plan. The findings help the plan agent understand what already exists versus what needs to be built.
URL extraction: The AI fetches and analyzes the content of any web pages you provide — documentation, design specs, API references, competitor analysis — and summarizes the relevant parts in the context of your topic.
Parallel research: When you choose to research both the codebase and URLs, both searches run at the same time, so you get results faster.
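The concurrency described above can be sketched with Python's `asyncio`: the codebase search and every URL fetch are started together and awaited as a group, so total latency is roughly that of the slowest source rather than the sum. The `search_codebase` and `fetch_and_summarize` functions are hypothetical stand-ins, not the product's actual API:

```python
import asyncio

async def search_codebase(topic: str) -> str:
    # Hypothetical stand-in for the codebase search backend.
    await asyncio.sleep(0.01)  # simulate I/O latency
    return f"codebase findings for {topic!r}"

async def fetch_and_summarize(url: str, topic: str) -> str:
    # Hypothetical stand-in for URL fetching and summarization.
    await asyncio.sleep(0.01)
    return f"summary of {url} for {topic!r}"

async def research_both(topic: str, urls: list[str]) -> dict:
    # Launch the codebase search and all URL fetches concurrently;
    # gather() preserves argument order in its results.
    code_task = search_codebase(topic)
    url_tasks = [fetch_and_summarize(u, topic) for u in urls]
    code_findings, *url_findings = await asyncio.gather(code_task, *url_tasks)
    return {"codebase": code_findings, "urls": url_findings}

results = asyncio.run(research_both("authentication", ["https://example.com/docs"]))
```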
Human-in-the-loop confirmation: Findings are presented for review before anything is written to your brief. You retain control over what context goes into the document.
Structured brief update: When you confirm the findings should be added, the AI synthesizes them into a well-organized section covering key points, source connections, and recommendations. It does not dump raw results into the brief.
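One way to picture the difference between a structured update and a raw dump is a small formatter that renders confirmed findings into a titled section with key points and per-source notes. This is purely illustrative; the real synthesis is performed by the AI, and the section layout here is an assumption:

```python
def synthesize_findings(topic, key_points, url_findings=None):
    """Render confirmed findings as a structured brief section
    rather than appending raw search output."""
    lines = [f"## Research: {topic}", "", "### Key points"]
    lines += [f"- {p}" for p in key_points]
    if url_findings:
        lines += ["", "### Sources"]
        lines += [f"- {url}: {summary}" for url, summary in url_findings.items()]
    return "\n".join(lines)

section = synthesize_findings(
    "authentication",
    ["Existing session middleware handles token refresh"],
    {"https://example.com/docs": "OAuth flow reference"},
)
```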
Beyond the guided research flow, the brief AI conversation supports two additional research modes:
Standard research — Tell the AI to "research the codebase and update the brief" or similar phrases. This runs a broader search across your codebase, organizational documents, and any URL or file you mention in the same message, then writes the findings directly to the brief without a confirmation step.
Deep research — For topics requiring multi-angle investigation, the AI can spawn parallel research threads across the web and your internal knowledge base, then synthesize the findings into a new brief or update the existing one.
Run guided research before generating a plan when your brief describes a feature that touches existing code. The plan agent will use the brief's content during generation, so codebase findings already in the brief result in more accurate, implementation-aware tasks.
You can run guided research multiple times on the same brief, adding findings from different topics or sources in separate passes.
If you want the AI to focus on a specific aspect of the topic, mention it when naming your research topic — for example, "authentication flows in the current codebase" rather than just "authentication."
For documentation-heavy features, providing URLs to the relevant docs (third-party libraries, API references, design systems) gives the plan agent concrete implementation details to work from.